All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves.) Began transitioning to IT in 1994 with my first mixed role in system administration.

IT in a Bubble

It is an old story in SMB IT: IT managers who get their start young, stay with a single company, work their way through the ranks and become venerable IT managers who have never worked outside of their current environment.  Just like the “good old days” when people stuck with a single company for their entire careers, this too sounds like a wonderful thing.  But IT has long rewarded “job hoppers”, those technically minded folk who move from shop to shop every few years.  The lack of direct upward mobility within single shops has encouraged this process – incremental promotions could only be found between companies, seldom within a single one.

Some people support and some people dispute the idea that there is value, or significant value, to be had by changing companies.  The idea is that by moving between environments you will glean techniques, procedures, processes and general experience that you will then bring with you to your next position – that you are a cumulative product of all of your past environments.  This concept, I believe, has some merit, more so in technology than in other fields.

In technology fields, I believe that moving between jobs, after a reasonable amount of time, is generally of much greater value than staying put.  The reason for this is relatively simple: most small businesses lack an ecosystem of support and training for IT professionals.  It is well known that IT professionals working in small shops lack the interaction with peers and vendors that is generally accepted as necessary for healthy professional development and that is common in enterprise shops.

An IT professional, after spending many years in a small shop, effectively all alone, tends to feel isolated, lacking the professional interaction that most specialists enjoy.  Most small professional or artisan shops have a number of specialists who work together, share research and experience, and are encouraged to work with competitors or vendors, attend trade events, pursue training and so on.  Few fields share IT’s odd dispersion of professionals, with only one or two people working together at any given company and little to no interaction with the outside world or with peers at other companies.

This isolation can lead to “IT insanity” if left unchecked.  An IT professional, working in a vacuum with little to no technical or professional feedback, will lose the ability to assess themselves against other professionals.  As often the sole provider of technology guidance and policy for potentially years or even decades, a lone IT professional can easily “drift off course,” losing contact with, and course correction from, the larger IT field, with only light guidance offered through the filtered world of vendors attempting to sell expensive products and services.

IT professionals suffering from “IT insanity” will often be found implementing bizarre, nonsensical policies that would never be tolerated in a shop with a strong peer-review mechanism, purchasing incredibly overpriced solutions for simple problems and working either completely with or completely without mainstream technologies – depending mostly upon individual personality.  This is caused in part by a growing reliance on a single, established skill set, as the lack of environmental change encourages continued dependence on existing skills and procedures.

IT insanity will commonly arise in IT shops that have only a single IT professional, or in shops with a strict hierarchy and no movement in the management ranks, where fresh ideas and experience from younger professionals do not feed up to the managers and established practices and “because I said so” policies are instead forced down the chain to the technologists actually implementing solutions.

This is not to say that all is lost.  There are steps that can be taken to avoid this scenario.  The first is to consider outsourcing IT – any shop small enough to face this dilemma should seriously consider whether having full time, dedicated internal staff makes sense in its environment.  Looking for fresh blood is another option – bringing in IT professionals from other shops and even other industries can work wonders.  In extreme cases some shops will even trade staff back and forth, keeping their existing employees while still managing to “mix things up.”

Short of drastic measures such as changing employees entirely, non-IT organizations need to think seriously about the professional health of their staff and look for opportunities for peer interaction.  IT professionals need continuous professional interaction for many reasons, and organizations need to actively support and promote this behavior.  Sending staff to training, seminars, peer groups, conventions and shows, or even out as volunteers to non-profit and community activities where they can provide IT support in an alternative environment, can do wonders for getting them out of the office, putting them face to face with alternative viewpoints and getting their hands on different technologies than they see in their day to day work.

IT managers need opportunities to explore different solution sets and to learn what others are doing in order to best be able to offer objective, broad-based decision making value to their own organizations.

IT Managers and the Value of Decision Making

When I was new to IT I can remember people using the phrase “No one ever got fired for buying IBM.”  At the time I was young and didn’t think too much about what this phrase implies.  Recently, I heard this phrase again – except this time it was “No one ever gets fired for buying Cisco” and soon thereafter I heard it applied to virtualization and VMWare.   This time I stopped to think about what exactly I was being told.

At face value, the statement comes across as little more than an observation, but the intent runs much deeper.  The statement is used as a justification for a decision that has been made and implies that the decision was made not because the product or vendor in question was the best choice but because it was the choice believed to carry the least risk for the decision maker.  Not the least risk or the most value for the organization – the least risk to the decision maker.

This implies one of two possibilities.  The first is that the decision maker in question, presumably an IT manager, feels that due diligence and careful analysis are not recognized or rewarded by the organization – that marketing, by an IT vendor to non-IT management, has convinced management that those products and services are superior without consideration of functionality, cost, reliability or service.

The second possibility is that the IT decision maker believes that they can get away without performing the cost, risk and functionality analysis which would be deemed proper for deciding between competing options.  By picking a popular option, well known in the marketplace, they believe they will be shielded from serious inquiry into their process and can simply deliver what sounds like a plausible solution with minimal effort on their part.

As IT Managers, one of the most crucial job functions that we perform is identifying, evaluating and recommending products and solutions to our organizations.  The fact that phrases like these are used so commonly suggests that a large percentage of IT managers and advisers are forgoing the difficult and laborious process of researching products and solutions, banking instead on making an easy decision that is likely to seem reasonable to management, regardless of whether or not it is a viable solution, let alone the best one for the organization.  The likely result is that a very expensive product is chosen when a less expensive or less well known option might have worked as well or better; in some extreme cases a product recommended this way may not even provide for the needs of the organization at all.

IT lives and dies by the decision making value that it brings to the organization.  We hate to admit it, but finding people who can fix desktops is not that hard and the economic value of someone who can fix anything wrong on the desktop versus simply rebuilding one is small.  If we eliminate quality decision analysis from the IT manager’s skill set, what value does he or she bring to the company?

State of Thin Clients

The IT world loves to swing back and forth between moving processing out to the user via fat clients and moving processing back to the server, leaving users with thin clients.  The battle is a long running one that started with the first appearance of multiuser computer systems several decades ago, continues to this day and will likely go on for a very long time to come.

When I began working in IT, thin clients were simple text terminals attached to a single, central server via serial connections.  Limited to very basic text input, these terminals served their purpose at the time, providing relatively low cost computing to a large number of users.  The system wasn’t pretty or glamorous, but it was quite functional.

These ancient terminals gave way to the personal computer and computing power shifted from the datacenter to the desktop allowing users to run powerful apps like Lotus 1-2-3 and WordPerfect.  Responsive graphical applications were a powerful draw for decentralized processing.  Users were enthralled with the new usability.  The text terminal went into very rapid decline.

Eventually centralized power was available in such quantities and at such a low price point that graphical applications could be run with almost as much responsiveness from the server, while clients could be “thin,” needing just a shim of an operating system – enough to provide remote access back to the server.  Thin computing became the darling of the industry again, the term itself arose and moving towards centralized processing came back into vogue.

Administrators love the central computing model because data and configuration remain in one place.  Backups and management are a breeze.  The idea, at least in theory, is that desktop support becomes a non-issue with all desktop clients being nothing more than commodity components that can be replaced anytime with completely interchangeable parts.  Since nothing is stored or configured on the desktop there is nothing to support there.

In the initial swings of the “thin computing pendulum” the market movement was dramatic.  When text terminal computing first became available it was practically the only model used in the real world.  The value was so dramatic that no one could really justify doing anything else.  When the PC was introduced the movement to the fat client was so ubiquitous that many younger IT professionals today have never actually seen text terminals in use – even though the move to fat “PC” clients was not as all-encompassing as the move to text terminals had been one pendulum swing earlier.

The PC model was generally better for end users because it mimicked how they used computers at home – those that had computers at home.  It also gave them more options for customization and, for better or for worse, the opportunity to begin installing software of their own rather than only the software preconfigured for them on the central server.

Over time there have been a lot of developments in both camps, giving each more and more of the advantages of the other.  Central domain services such as Microsoft’s Active Directory have come along, allowing central management to extend out to fat clients and bringing control and management more in line with traditional thin computing models.  Likewise, companies like Citrix have worked very hard developing new technologies that allow thin clients to perform much more like robust fat clients, making their use as seamless as possible for end users and even making offline use possible for laptop users.

Most shops today have adopted hybrid models: fat clients where they make sense, and thin clients for certain categories of users, for remote workers and for business continuity scenarios.

Over the past decade we have seen a shift in the way that business applications are created and deployed.  Today almost all new business applications are web-based and have no client platform dependency.  This affords today’s IT departments a potential new opportunity: to shift from a traditional thin client platform, which requires remote graphical access, to the browser as the new thin client platform.

The move to web apps has happened slowly; most businesses have a rather large legacy codebase on which they are quite dependent and which cannot be easily transferred to the new web app architecture, and some apps simply are not good candidates for this architecture.  But by and large the majority of new business applications are web based, most often written in Java or .NET, and these apps are prime candidates for a new thin computing model.

If our custom business apps are available via the browser then the only commonly used apps still holding us back are the traditional productivity apps, such as the office suites that are widely used by nearly all staff today (if they have a computer at all.)  Very few desktop apps other than these are actually pervasive.  Increasingly we are seeing browser-based alternatives to the traditional office suites.  Everyone is very aware of Google Apps as a pioneer in this area, with Microsoft now offering online MS Office as well.  But the popular offerings making consumer news headlines require businesses to totally rethink long term strategies around keeping critical business data within their walls and are not likely to be highly disruptive to the enterprise for quite some time.

What does pose a threat to the status quo are alternative software products such as ThinkFree Office, which are installed within the organization and used and secured internally just like any other normal business application.  This category of “traditionally installed internal web applications” will allow enterprise IT departments to begin reconsidering their end users’ platforms without having to reevaluate their entire concept of IT in general.  The biggest barriers today are lingering legacy business applications and power users who rely on specific desktop apps that cannot be encapsulated within a browser.

One of the great advantages, however, of the browser as the new thin client is how simple it is to mix browser-based apps with traditional apps.  The move is transparent and most large businesses are moving in this direction today even if there is no overarching strategy to do so.  The market momentum to develop all new apps for the web is causing this to happen naturally.

Another key advantage of a completely “web based” architectural model is the great ease with which it can be exposed to users outside of the corporate network.  Instead of using cumbersome VPN clients and company laptops, employees can find any web browser, sign in to the company network and have secure business applications delivered to any browser, anywhere.

Bringing this almost unnoticed shift into sharp relief today are a handful of, of all things, consumer devices: Apple’s iPhone and iPad and Google’s Android and ChromeOS platforms.  What all of these devices have in common is a focus upon being primarily thin web appliances – thin clients for consumers.  With the majority of consumer computing focused upon web connectivity, the need for anything else from a platform is nearly non-existent in the consumer market.  This means that users who once brought their home PC experience to the office as their expectation of a computing environment will, within a very short period of time, begin to bring web-based thin computing as their new expectation.

When this shift happens IT departments will need to rethink their internal application delivery strategy.  The change doesn’t have to be dramatic if current development trends are followed and legacy systems are routinely updated.  In fact, one of the great benefits of this new model is that traditional fat clients function very well as browser platforms and will most likely do so for a very long time to come.  Companies adopting this model will likely be able to slow desktop purchasing cycles and prepare to purchase some form of traditional thin client with an embedded browser, or to move to a business version of the new nettop trend that we are beginning to see emerge in the consumer space.  Some businesses may even attempt the rather dangerous path of using consumer devices, but the lack of management and security features will likely keep this from being popular in all but rare instances.

I believe, though, that this swing of the pendulum will not be as dramatic as the last one, just as that one was not as dramatic as the swing before it.  It will be an important trend, but IT departments understand more and more that no new technological shift is a silver bullet and that with each new opportunity comes new challenges.  Most IT departments will need to implement some degree of browser-based thin computing over the next few years but most will retain a majority user base of fat clients.  Hybrid environments, like we’ve seen for many years with more traditional models, will continue as before, with each technology being used in the target areas where it makes the most sense.

The one area where thin clients continue to be challenged the most is in mobile computing where disconnected users end up being digitally marooned away from their company networks unable to continue working until network connectivity is reestablished.  This is a significant issue for power users who must travel extensively and need to be able to continue working regardless of their current connectivity.  Today this is being solved in the traditional thin client arena thanks to companies like Citrix who continue to advance the state of the art in thin application delivery.

In the browser-based arena we have had to turn to technologies like Google Gears and Adobe AIR in the past to make this possible, but these had poor market penetration.  Coming down the pike, however, is the new HTML 5 Offline API, which is set to redefine how the web works for users who need to go “off the grid” from time to time.  With HTML 5 incorporating offline capabilities and a richer feature set into the specification for the web itself, we expect to see broad and rapid adoption from all of the leading vendors – most likely even before the draft standard is finalized.  While still quite some ways away, this new standard will certainly lay the groundwork for a significant shift towards the browser as a ubiquitous, standard and robust platform.
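
As a rough illustration of how small the server-side piece of this can be, here is a minimal sketch of serving an HTML 5 offline cache manifest using Python’s built-in http.server module.  The file names (app.html, offline.appcache) are purely hypothetical and a real deployment would sit behind a proper web server; the one essential detail shown is that the manifest must be delivered with the text/cache-manifest MIME type for browsers to honor it.

```python
# Hypothetical sketch: serve a static web app plus its HTML 5 offline manifest.
# The manifest file (e.g. offline.appcache) would list the resources to cache:
#
#   CACHE MANIFEST
#   app.html
#   app.js
#   app.css
#
# Browsers ignore the manifest unless it is served as text/cache-manifest,
# which is the one piece of server configuration this sketch adds.
import http.server
import socketserver

class ManifestAwareHandler(http.server.SimpleHTTPRequestHandler):
    # Extend the default MIME map so .appcache files are served correctly.
    extensions_map = dict(
        http.server.SimpleHTTPRequestHandler.extensions_map,
        **{".appcache": "text/cache-manifest"}
    )

if __name__ == "__main__":
    with socketserver.TCPServer(("", 8000), ManifestAwareHandler) as httpd:
        httpd.serve_forever()
```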

The future of thin computing looks to be incredibly promising, both in the enterprise and, for the first time, in the consumer arena as well.  Adoption of thin computing models will be spurred on by the current movement towards Software as a Service models, and SaaS adoption will continue to be encouraged by the widespread presence of thin computing devices.  In many ways browser-based thin computing represents the technology aspect that is now maturing in the SaaS arena, where SaaS itself is maturing in social acceptance rather than technical feasibility.

Choosing an Email Architecture: Internal or Hosted

If you talk to email specialists, what you seem to find – in my small, anecdotal survey of the market – is that half of these professionals will tell you to simply install email locally, normally Microsoft Exchange, and the other half will tell you to go with a hosted (a.k.a. Software-as-a-Service, SaaS or “in the cloud”) service, most often Google Apps.  But email is not such a simple architectural component that it should be distilled to trite answers.  Email is one of the most important components of your business’ communications infrastructure, often surpassing telephony, and choosing the right delivery methodology for your company is critical for your long term success.

We will start by considering some basic factors in email hosting.  Email systems require a good deal of bandwidth, quite a significant amount of storage, high reliability, careful management and significant security consideration.

Bandwidth is the first area to consider.  Every email sent and received must travel between the end user and the email server, as well as between the email server and the outside world in the case of externally destined email.  In small businesses nearly all email is destined to leave the company network to go to clients, customers, vendors, etc.  In larger enterprises email use changes; as we approach the Fortune 100, email shifts from being almost exclusively a tool for communicating with people outside the organization to being a platform primarily used for internal communications.

This shift in how email itself is used is a very important factor in deciding how to deploy email services.  If email is used almost exclusively for intra-staff communications then it will lend itself very well to an in-house email system, increasing security and improving WAN bandwidth utilization.  The caveat, of course, is that a highly distributed company of any size would not keep this traffic on a LAN and so should treat its email usage as external regardless of whether or not it is intra-staff.  Small companies whose communications happen primarily with external parties will find better utilization in a hosted service.
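
To make the bandwidth reasoning concrete, the sketch below runs the kind of back-of-the-envelope estimate described above.  Every figure in it (user count, message volume, average size, external ratio) is an assumed placeholder to be replaced with numbers from your own environment; the point is only to show how the internal/external split changes what crosses the WAN.

```python
# Illustrative back-of-the-envelope estimate only; all figures are assumptions.
users = 50                      # mailboxes in the company
messages_per_user_per_day = 60  # sent plus received
avg_message_kb = 75             # average size including typical attachments
external_ratio = 0.9            # share of mail that leaves the company network

daily_kb = users * messages_per_user_per_day * avg_message_kb

# With a hosted provider, every message a user sends or reads crosses the WAN.
wan_hosted_mb = daily_kb / 1024

# With an in-house server on the LAN, only externally bound or received mail does.
wan_inhouse_mb = daily_kb * external_ratio / 1024

print(f"Hosted email:   ~{wan_hosted_mb:,.0f} MB/day over the WAN")
print(f"In-house email: ~{wan_inhouse_mb:,.0f} MB/day over the WAN")
```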

Storage is actually often a smaller factor in email architecture decision making than it may at first appear that it should be.  Traditionally email’s storage requirements made a compelling argument for hosting internally due to the cost benefit of keeping large storage, especially that used for archival needs, local.  Recently, large hosted email vendors such as Rackspace and Google Apps have brought the price of online, archival email storage so low that, in many cases, it may actually be more cost effective to utilize hosted storage rather than local storage or, at least, the cost is at parity.  Even long term archival storage can be had very cost effectively in a hosted solution today.

Reliability is a rather complex subject.  Email is critical to any organization.  If an email system goes down many companies simply grind to a halt.  In some cases, the company effectively shuts down when email stops flowing.  Not only do employees stop communicating with each other but customers, vendors, suppliers and others see the company as being offline at best and out of business at worst.  Interrupting communications with the outside world can represent immediate and serious financial impact to almost any business.

Hosted email has the obvious advantage of being hosted in a large, commercial datacenter with redundancy at every level (assuming a top tier vendor) from hardware to storage to networking to power to support.  Hosting email in house requires a business to determine the level of redundancy that is most cost effective given the business’ ability to withstand email downtime and is generally an exercise in compromises – how much reliability can a company do without given the cost necessary to provide it.
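
One way to frame that compromise is to translate availability targets into expected downtime and a rough business cost.  The figures below – the availability levels and the assumed per-hour cost of being unreachable by email – are illustrative placeholders only, not measurements of any particular product or vendor.

```python
# Illustrative only: convert availability targets into yearly downtime and cost.
hours_per_year = 24 * 365
cost_per_hour_of_outage = 500  # assumed business impact; replace with your own

for label, availability in [
    ("Single in-house server", 0.99),
    ("Redundant in-house setup", 0.999),
    ("Top tier hosted service", 0.9999),
]:
    downtime_hours = hours_per_year * (1 - availability)
    yearly_cost = downtime_hours * cost_per_hour_of_outage
    print(f"{label:26s} {availability:.2%} uptime: "
          f"{downtime_hours:6.1f} h/yr down, ~${yearly_cost:,.0f}/yr")
```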

Some companies will opt to host email servers at a colocation facility, which will provide them with many redundant components, but to match the features of a Rackspace or Google level offering, multiple datacenters would likely be needed.  Colocation is a halfway option, providing many of the technical features of hosted options with the management and flexibility of in-house email systems.

A more common scenario, though, is for companies to host a single email server completely within their own walls, relying on their internal power, hardware and network connection.  In a scenario like this a company must either take extreme measures to ensure uptime – such as hosting a completely redundant site at immense cost – or front-end their entire email infrastructure with a reliable online spooling service such as Postini, MessageLabs or MXLogic.  The cost of such services, while critical for the reliability most companies need, is often equal to or even greater than the cost of complete email hosting options.  This spooling service adds an ongoing, scaling cost that will likely make fully hosted email services consistently less expensive than in-house hosting.
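
As a side note on how such front-ending works mechanically: the spooling or filtering service is placed in the domain’s MX records, so the outside world delivers mail to the service rather than directly to the internal server, and the service queues or relays it as needed.  The hypothetical sketch below (using the third-party dnspython package and a placeholder domain) simply prints a domain’s MX records, which is where that arrangement would be visible.

```python
# Hypothetical sketch using the third-party dnspython package.
# "example.com" is a placeholder; substitute your own domain to see whether
# inbound mail is pointed at your own server or at a filtering/spooling service.
import dns.resolver

answers = dns.resolver.resolve("example.com", "MX")
for record in sorted(answers, key=lambda r: r.preference):
    print(f"priority {record.preference}: {record.exchange}")
```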

Management cost is very difficult to determine but requires attention.  A fully hosted solution requires relatively little technical knowledge.  The time needed to manage it is low and the skill level necessary to do so is relatively low.  With an in-house solution your company must supply infrastructure, networking, security, system and email skills.  Depending on your needs and your available staff this may be a part time task for a single professional or it may require multiple FTEs or even outside consultants.  The total time necessary to manage an in-house email system will vary dramatically and is often very hard to calculate due to the complex nature of the situation but, at a minimum, it is orders of magnitude greater than that of a hosted solution.

Security is the final significant consideration.  Beyond traditional system-level security, email requires spam filtering.  Handling spam can be done in many ways: in software on the email server, on an appliance located on the local network, farmed out to a spam filtering service or left to the hosted email solution provider.  Spam filtering, if handled internally, is seldom a set-and-forget service but one that requires regular attention and generally extra cost in licensing and management.

After looking at these main considerations every company should sit down, crunch the numbers, and determine which solution makes the most sense for them on an individual level.  Often it is necessary to use a spreadsheet and play with several scenarios to see what each solution will cost both up front and over time.  This, combined with a valuation of features and their applicability to the company, will be critical in determining the appropriateness of each option.
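
A very simple version of that spreadsheet exercise can be sketched in a few lines of code.  All of the figures below are invented placeholders rather than vendor quotes, and a real model would include migration costs, storage tiers and growth; the sketch only shows the shape of the comparison between up-front and recurring costs over time.

```python
# Placeholder figures only; substitute real quotes and salaries before deciding.
mailboxes = 50
years = 5

# Hosted: a flat per-mailbox subscription covering service, storage and filtering.
hosted_per_mailbox_per_month = 7.00
hosted_total = hosted_per_mailbox_per_month * mailboxes * 12 * years

# In-house: up-front hardware and licensing, ongoing admin time, spam filtering.
upfront_hardware_and_licensing = 12000
admin_hours_per_month = 10
admin_hourly_cost = 60
spam_filtering_per_mailbox_per_month = 1.50
inhouse_total = (
    upfront_hardware_and_licensing
    + admin_hours_per_month * admin_hourly_cost * 12 * years
    + spam_filtering_per_mailbox_per_month * mailboxes * 12 * years
)

print(f"Hosted, {years} years:   ${hosted_total:,.0f}")
print(f"In-house, {years} years: ${inhouse_total:,.0f}")
```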

The secret weapons of the in-house solution are features, integration and flexibility.  In-house email options can be extended or modified to offer exactly the feature set that the organization requires – sometimes at additional cost.  A perfect example of this is Zimbra’s instant messaging integration which can be a significant value-add for an email platform.  This has to be considered in addition to raw cost.  Integration with existing internal authentication mechanisms can be an important factor as well.

In my own experience and cost calculations, hosted solutions represent the vast majority of appropriate solutions in the SMB space due to raw economics, while large and enterprise class customers will find insurmountable benefits from the flexibility and internal communications advantages of in-house solutions.  Small businesses struggle mostly with cost while large businesses struggle primarily with the communications complexity of their scale.  Large businesses also get the best value from in-house solutions due to “professional density” – the inverse of the number of IT professionals whose time is wasted due to corporate scale inefficiencies.

Today, whether a business chooses to host their own email or to receive email as a service, there are many options to choose from even once a basic architecture has been selected.  Traditionally only a few in-house options such as MS Exchange and Lotus Notes would be considered, but new alternatives such as Zimbra (recently acquired by VMWare), Scalix and Kerio are expanding the landscape with lower costs, new deployment options and aggressive feature sets.  Hosting’s relative newcomer, and overnight industry heavyweight, Rackspace is drawing a lot of attention with their new email offerings which more closely mimic traditional in-house offerings, while Google continues to get attention with their unique GMail services.  I expect to see the hosted email space continue to become more competitive with new integration features being a key focus.

Every business is unique and the whole of the factors must be considered.  Using a combination of business and IT skills is necessary to evaluate the available options and opportunities and no one discipline should be making these decisions in isolation.  This is a perfect example of where IT managers must understand the economics of the business in addition to the technological aspects of the solution.