All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves).  Began transitioning to IT in 1994 with my first mixed role in system administration.

The True Cost of Printing

Of all of the things that are handled by your technology support department, printing is likely the one that you think about the least.  Printing isn’t fancy or exciting or a competitive advantage.  It is a lingering item from an age without portable reading devices, from an era before monitors.  Printers are going to be around for a long time to come, and I do not wish to imply otherwise, but there is a lot to consider when it comes to printers, and much of that consideration is easily overlooked.

When considering the cost of printing we often calculate the cost of the printer itself along with the consumables: paper and ink.  These things alone rack up a pretty serious per-page cost for an average business.  Planning for an appropriate lifespan and duty cycle of a printer is critical to keeping printing cost effective.  And do not forget the cost of parts replacement as well as stock-piled ink and paper.  These may seem minor, but printers often cause an investment in inventory that is never recovered.  When the printer dies, supplies for that printer are often useless.
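To make the per-page figure concrete, here is a minimal back-of-the-envelope sketch.  Every number in it is a hypothetical placeholder, not a measured figure; the point is only that the purchase price, the consumables and the stranded supplies all belong in the same calculation.

```python
# Rough per-page cost for a small desktop printer.
# All figures are hypothetical placeholders; substitute your own.
printer_price = 200.00        # purchase price of the printer
lifespan_pages = 10_000       # pages printed over the printer's useful life
ink_cost_per_page = 0.08      # ink or toner cost per page
paper_cost_per_page = 0.01    # paper cost per page
stranded_supplies = 60.00     # ink and paper stock left unusable when the printer dies

hardware_per_page = (printer_price + stranded_supplies) / lifespan_pages
total_per_page = hardware_per_page + ink_cost_per_page + paper_cost_per_page
print(f"Approximate cost per page: ${total_per_page:.3f}")  # about $0.12 here
```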

The big, hidden cost of printing is none of these things.  The big cost is in supporting the printers, both up front with the initial deployment and even more so in continuing support.  This is especially true in a smaller shop where the trend is to use many small printers rather than fewer large ones.  Deploying and supporting a five thousand dollar central office printer costs no more than, and possibly less than, deploying a two hundred dollar desktop inkjet.  The bigger the printer, the better the drivers and vendor support that can usually be expected, making normal support tasks easier and more reliable.

At a minimum, rolling out a new desktop printer is going to take half an hour.  Realistically it is far more likely to take closer to an hour.  Go ahead, count up the time: time to deliver the printer to the station, time to unpack it, time to physically set it up, time to plug it in, time to install drivers and software, time to configure it and time to print a test page.  If it were a one-time race, you could probably do these steps pretty quickly.  But printer support is not a production line and rarely, if ever, do you have someone performing these exact steps in a rapidly repeatable manner.  More likely, installing a printer is a “one off” activity that requires learning the new printer, tracking down the current driver and troubleshooting potential issues.

An hour to deploy a two hundred dollar printer could add fifty percent to the cost of the printer quite easily.  There are a lot of factors that can cause this number to skyrocket from a long travel distance between receiving location and the desk to missing cables to incompatible drivers.  Any given printer could take the better part of a day to deploy when things go wrong.  We are not even considering “disruption time” – that time in which the person receiving the printer is unable to work since someone is setting up a printer at their workstation.
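To put numbers to that fifty percent figure, here is a minimal sketch.  The fully loaded labor rate is an assumption chosen purely for illustration; use your own technician cost.

```python
# How an hour of deployment labor inflates the real cost of a cheap printer.
# The labor rate below is a hypothetical fully loaded figure.
printer_price = 200.00
labor_rate_per_hour = 100.00   # technician's fully loaded hourly cost (assumed)
deploy_hours = 1.0             # unpack, cable, install drivers, test

deployment_cost = labor_rate_per_hour * deploy_hours
markup = deployment_cost / printer_price
print(f"Deployment adds ${deployment_cost:.0f}, or {markup:.0%} of the purchase price")
# A bad day of driver hunting at four hours would add 200% instead.
```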

Now that the printer has been set up and is, presumably, working just fine we need to consider the ongoing cost of printer support.  It is not uncommon for a printer to sit, undisturbed, for years chugging along just fine.  But printers have a surprisingly high breakage rate caused by the nature of ink and paper, a propensity for printers to be reassigned to different physical locations, and the tendency for the machines to which they are attached to be changed or updated, introducing driver breakage.  Add these things together and the ongoing support cost of a printer can be surprisingly high.

I recently witnessed the support of a company with a handful of high profile printers.  In a run of documentation, physical cabling and driver issues, the printers were averaging between four and eight hours of technician time, per printer, to set up correctly.  Calculate out the per-hour cost for that support and those printers, likely already costly, just became outrageously expensive.

I regularly hear of shops that decide to re-purpose printers and spend many times the cost of the printers in labor hours as older printers are massaged into working with newer computer setups or vice versa. Driver incompatibility or unavailability is far more common than people realize.

Printers have the additional complication of being used in many different modes such as directly attached to a workstation, directly attached and shared, directly attached to a print server, directly attached to the network or attached to a print server over the network.  While this complexity hardly creates roadblocks, it does significantly slow work done on printers in a majority of businesses.

Printers, by their nature, are very difficult to support remotely.  Getting a print driver installed remotely is easy.  Knowing that something has actually printed successfully is something completely different.  Considering that printer support should be one of the lower cost support tasks, this need for physical on-site presence for nearly every printer support task dramatically increases the cost of support, if only because it increases the time to perform a task and receive appropriate feedback.

When we take these costs and combine them with the volume of printing normally performed by a printer we can start to acquire a picture of what printing is really costing.  The value of centralized printing suddenly takes on a new level of significance when seen through the eyes of support rather than through the eyes of purchasing.  Beyond centralizing printing when possible, it is also important to eliminate unnecessary printing.

Good planning, strategic purchasing and a holistic approach can mitigate the potential for surprise costs in printing.


Just Because You Can…

I see this concept appear in discussions surrounding virtualization all of the time.  This is a broader, more general concept but virtualization is the “hot, new technology” facing many IT organizations and seems to be the space where we currently see the “just because you can, doesn’t mean you should” problems rearing their ugly heads most often.  As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do and do not blindly attempt to make our decisions based on popular deployment methodologies or, worse, myths.

Virtualization itself, I should point out, is something I feel should be a default decision today for those working in the x64 computing space, with systems being deployed sans virtualization only when a clear and obvious necessity exists such as specific hardware needs, latency sensitive applications, etc.  Barring any specific need, virtualization is free to implement from many vendors and offers many benefits both today and in future-proofing the environment.

That being said, what I often see today is companies deploying virtualization not as a best practice but as a panacea for all perceived IT problems.  This it certainly is not.  Virtualization is a very important tool to have in the IT toolbox and one that we will reach for very often, but it does not solve every problem and should be treated like every other tool that we possess and used only when appropriate.

I see several things recurring when virtualization discussions come up.  Many companies today are moving towards virtualization not because they have identified a business need but because it is the currently trending topic and people feel that if they do not implement virtualization they will somehow be left behind or miss out on some mythical functionality.  This is generally good in that it is increasing virtualization adoption, but it is bad because good IT and business decision making processes are being bypassed.  What often happens is that, in the wave of virtualization hype, IT departments feel that they not only have to implement virtualization itself but must do so in ways that may not be appropriate for their business.

There are four things that I often see tied to virtualization, often accepted as virtualization requirements, whether or not they make sense in a given business environment.  These are server consolidation, blade servers, SAN storage and high availability or live failover.

Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget that there are other important reasons for implementing it.  Clearly consolidation is a great benefit for nearly all deployments (mileage may vary, of course) and can nearly always be achieved simply through better utilization of existing resources.  It is a pretty rare company running more than a single physical server that cannot shave some amount of cost through limited consolidation, and it is not uncommon to see datacenter footprints decimated in larger organizations.
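A rough sizing sketch shows why even modest consolidation usually saves money.  The utilization figures below are assumptions chosen only for illustration, and in practice memory, not CPU, is frequently the binding constraint.

```python
import math

# Back-of-the-envelope consolidation estimate.  All inputs are hypothetical.
physical_servers = 12
avg_cpu_utilization = 0.10      # 10% average utilization on the existing servers
target_host_utilization = 0.60  # leave headroom on the consolidated hosts

total_load = physical_servers * avg_cpu_utilization
hosts_needed = max(1, math.ceil(total_load / target_host_utilization))
print(f"{physical_servers} lightly used servers could plausibly run on {hosts_needed} host(s)")
# Sanity check memory, storage and network capacity the same way.
```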

In extreme cases, though, it is not necessary to abandon virtualization projects just because consolidation proves to be out of the question.  These cases exist for companies with high utilization systems and little budget for a preemptive consolidation investment.  But these shops can still virtualize “in place” systems on a one to one basis to gain other benefits of virtualization today and look to consolidate when hardware needs to be replaced tomorrow or when larger, more powerful servers become more cost effective in the future.  It is important to not rule out virtualization just because its most heralded benefit may not apply at the current time in your environment.

Blade servers are often seen as the choice for virtualization environments.  Blades may play better in a standard virtualization environment than they do with more traditional computational workloads, but this is both highly disputable and not necessarily applicable.  Being a good scenario for blades themselves does not make it a good scenario for a business.  Just because the blades perform better than normal when used in this way does not imply that they perform better than traditional servers – only that they have potentially closed the gap.

Blades need to be evaluated using the same harsh criteria when virtualizing as when not and, very often, they will continue to fail to provide the long term business value needed to choose them over the more flexible alternatives.  Blades remain far from a necessity for virtualization and often, in my opinion, a very poor choice indeed.

One of the most common misconceptions is that by moving to virtualization one must also move to shared storage such as SAN.  This mindset is the obvious reaction to the desire to also achieve other benefits of virtualization which, even when they do not strictly require SAN, benefit greatly from it.  The ability to load balance or fail over between systems is heavily facilitated by having a shared storage backend.  It is a myth that this is a hard requirement, but replicated local storage brings its own complexities and limitations.

But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own.  If virtualization makes sense for your environment but you need no features that require SAN, then virtualize without shared storage.  There are many cases where local storage backed virtualization is an ideal deployment scenario.  There is no need to dismiss this approach without first giving it serious consideration.

The last major assumed necessary feature of virtualization is system level high availability or instant failover for your operating system.  Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us.  However, few companies needed high availability at this level prior to implementing virtualization and the price tag of the necessary infrastructure and software to do it with virtualization is often so high as to make it too expensive to justify.

High availability systems are complex and often overkill.  It is a very rare business system that requires transparent failover for even the most critical systems, and those companies with that requirement would almost certainly already have failover processes in place.  I see companies moving towards high availability all of the time when looking at virtualization simply because a vendor saw an opportunity to dramatically oversell the original requirements.  The cost of high availability is seldom justified by the revenue protected by the associated reduction in downtime.  With non-highly available virtualization, downtime for a failed hardware device might be measured in minutes if backups are handled well.  This means that high availability has to justify its cost by potentially eliminating just a few minutes of unplanned downtime per year, minus any additional risks assumed through the added system complexity.  Even in the biggest organizations this is seldom justified on any large scale and in a more moderately sized company it is rarely justified at all.  But today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss, simply because the marketing literature promoted the concept.
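A simplified break-even sketch makes the point.  Every figure below is hypothetical; plug in your own downtime, recovery and licensing numbers before drawing any conclusion.

```python
# Does high availability pay for itself?  All figures are hypothetical examples.
ha_extra_cost_per_year = 15_000.00  # extra licensing, shared storage, complexity (amortized)
expected_failures_per_year = 0.5    # hardware failures requiring recovery
recovery_hours_without_ha = 0.5     # restore onto spare hardware from good backups
cost_of_downtime_per_hour = 1_000.00

expected_downtime_cost = (expected_failures_per_year
                          * recovery_hours_without_ha
                          * cost_of_downtime_per_hour)
print(f"Expected annual downtime cost without HA: ${expected_downtime_cost:,.0f}")
print(f"Annual premium for HA:                    ${ha_extra_cost_per_year:,.0f}")
# With these inputs, downtime would need to cost roughly 60x more per hour
# before the HA premium breaks even.
```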

Like anything, virtualization and all of the associated possibilities that it brings to the table need to be evaluated individually in the context of the organization considering them.  If an individual feature does not make sense for your business, do not assume that you have to purchase or implement that feature.  Many organizations virtualize but use only a few, if any, of these “assumed” features.  Don’t look at virtualization as a black box; look at the parts and consider them as you would consider any other technology project.

What often happens is a snowball effect where one feature, likely high availability, is assumed to be necessary without the proper business assessment being performed.  Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost.  Even if the high availability features are never purchased, the decision to use SAN might already have been made and never revisited after the plan changes.  It is very common, in my experience, to find projects of this nature with sometimes more than fifty percent of the total expenditure going to products that the purchaser is unable to even describe the reason for having purchased.

This concept does not stop at virtualization.  Extend it to everything that you do.  Keep IT in perspective of the business and don’t assume that going with one technology automatically means that you must adopt the other technologies that are popularly associated with it.

Spotlight on SMB Storage

Storage is a hard nut to crack.  For businesses storage is difficult because it often involves big price tags for what appear to be nebulous gains.  Most executives understand the need to “store” things, and to store more of them, but they understand very little about performance, access methods, redundancy and risk calculations, backup and disaster recovery.  This makes the job of IT difficult because we need to explain why budgets often need to be extremely large for what appears, to the business stakeholders, to be an invisible system.

For IT, storage is difficult because storage systems are complex – often the single most complex system within an SMB – and often, due to their expense and centralization, exist in very small quantities within a business.  This means that most SMBs, if they have any storage systems, have only one and keep it for a very long time.  This lack of broad exposure, combined with the relatively infrequent need to interact with storage systems, leaves SMB IT departments responsible for a large budget item of incredible criticality to the business that makes up only a small percentage of their “task” range and with which, by the very nature of the beast, they have very little hands-on experience.  Other areas of IT are far more accessible for experimentation, testing and education purposes.

Between these two major challenges we are left with a product that is poorly understood, in general, by both management and IT.  Storage is so misunderstood that often IT departments are not even aware of what they need at all and are doing little more than throwing darts at the storage dart board and starting from wherever the darts land – and often starting by calling vendors rather than consultants, which leads them down a path of “decision already made” while they believe they are getting advice.

Storage vendors, knowing all of this, do little to aid the situation: once contact between an SMB and a vendor is made it is in the vendor’s best interest not to educate the customer, since the customer already made the decision to approach that vendor before having the necessary information at hand.  So the vendor simply wants to sell whatever they have available.  Seldom does a single storage vendor have a wide range of products in their own lines, so going directly to a vendor before knowing exactly what is needed goes much, much farther towards the customer having effectively already decided what to buy than it does in other arenas of technology, and this can cause costs to be off by orders of magnitude compared to what is needed.

Example: Most server vendors offer a wide array of servers, both in the x64 family as well as large scale RISC machines and other, niche products.  Most storage vendors offer a small subset of storage products, offering only SAN or only NAS or only “mainframe” class storage or only small, non-replicated storage, etc.  Only a very few vendors have a wide assortment of storage products to meet most needs, and even the best of these lack full market coverage spanning the smaller SMB market as well as the mid and enterprise markets.

So where do we go from here?  Clearly this is a serious challenge to overcome.

The obvious option, and one that shops need to not rule out, is turning to a storage consultant.  Someone who is not reselling a solution or, at the very least, is not reselling a single solution but has a complete solution set from which to choose and who is able to provide a low cost, $1,000 solution as well as a $1,000,000 solution – someone who understands NAS, SAN, scale out storage, replication, failover, etc.  When going to your consultant do not make the presumption that you know what your costs will be – there are many, many factors and by considering them carefully you may be able to spend far less than you had anticipated.  But do have budgets in mind, risk aversion well documented, costs for downtime and a very complete set of anticipated storage use case scenarios.

But turning to a consultant is certainly not the only path.  Doing your own research, learning the basics and following a structured decision making process can get you, if not to the right solution, at least a good way down the right path.  There are four major considerations when looking at storage: function (how storage is used and accessed), capacity, speed and reliability.

The first factor, function, is the most overlooked and the least understood.  In fact, even though this is the most basic of concerns, this is often simply swept under the carpet and forgotten.  We can answer this question by asking ourselves “Why are we purchasing storage?”

Let us address this systematically.  There are many reasons that we will be buying storage.  Here are a few popular ones: to lower costs over having large amounts of storage locally on individual servers or desktops, to centralize management of data, to increase performance and to make data more available in the case of system failure.

Knowing which of these factors, or which other factor not listed here, is driving you towards shared storage is important as it will likely provide a starting point in your decision making process.  Until we know why we need shared storage we will be unable to look at the function of that storage, which, as we know already, is the most fundamental decision making factor.  If you cannot determine the function of the storage then it is safe to assume that shared storage is not needed at all.  Do not be afraid to make this decision; the vast majority of small businesses have little or no need for shared storage.

Once we determine the function of our shared storage we can now, relatively easily, determine capacity and performance needs.  Capacity is the easiest and most obvious factor to quantify.  Performance, or speed, is easy to state and explain but much more difficult to quantify as IOPS are, at best, a nebulous concept and at worst completely misunderstood.  IOPS come in different flavours and there are concerns around random access, sequential access, burst speeds, latency and sustained rates, and then come the differences between reading and writing!  It is difficult to even determine the needed performance let alone the expected performance of a device.  But with careful research, this is achievable and measurable.
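As an illustration of the kind of rough math involved, the sketch below estimates what a small spindle-based array might deliver once a read/write mix and a RAID write penalty are taken into account.  The per-spindle figure and penalties are commonly quoted ballpark assumptions, not guarantees, and real workloads vary enormously.

```python
# Very rough IOPS estimate for a small spindle-based array.
# Per-spindle IOPS and write penalties are ballpark assumptions.
spindles = 8
iops_per_spindle = 150     # often quoted for 10k RPM SAS drives
read_fraction = 0.70       # 70% reads, 30% writes
raid_write_penalty = 4     # RAID 5; RAID 10 is roughly 2, RAID 6 roughly 6

raw_iops = spindles * iops_per_spindle
# Effective IOPS once each logical write costs multiple physical operations:
effective_iops = raw_iops / (read_fraction + (1 - read_fraction) * raid_write_penalty)
print(f"Raw: {raw_iops} IOPS, workload-adjusted: {effective_iops:.0f} IOPS")
```

Comparing a number like this against measured or estimated application demand is far more useful than comparing vendor spec sheets against each other.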

Our final factor is reliability.  This, like functionality, seems to be a recurring stumbling point for IT professionals looking to move into shared storage.  It is important, nay, absolutely critical, to keep in mind that storage is “just another server” and that the concepts of redundancy and reliability that apply to normal servers apply equally to dedicated shared storage systems.  In nearly all cases, enterprise storage systems are built on enterprise servers – same chassis, same drives, same components.  What is often confusing is that even SMBs will look to mid or high end storage systems to support much lower end servers, which can sometimes cause storage systems to appear mystical in the same way that big iron servers may appear to someone only used to commodity server hardware.  But do not be misled: the same principles of reliability apply and you will need to gauge risk exactly the same as you always have (or should have) to determine what equipment is right for you.

Taking time to assess, research and understand storage needs is very important as your storage system will likely remain a backbone component of your network for a very long time due to its extremely high cost and the complexity of replacing it.  Unlike the latest version of Microsoft Office, buying a new shared storage system will not cause a direct impact on an executive’s desktop and so lacks the flash necessary to drive “feature updates” as well.

Now that we have our options in front of us we can begin to look at real products.  Based on our functionality research we now should be able to determine if we are in need of SAN, NAS or neither.  In many cases – far more than people realize – neither is the correct choice.  Often adding drives to existing servers or attaching a DAS drive chassis where needed is more cost effective and reliable than doing something more complex.  This should not be overlooked.  In fact, if DAS will suit the need at hand it would be rare that something else would make sense at all.  Simplicity is the IT manager’s friend.

There are plenty of times when DAS will not meet the current need.  Shared storage certainly has its place, even if only to share files between desktop users.  With today’s modern virtualization systems shared storage is becoming increasingly popular – although even there DAS is too often avoided even when it might suit the existing needs well.

With rare exception, when shared storage is needed NAS is the place to turn.  NAS stands for Network Attached Storage.  NAS mimics the behaviour of a fileserver (NAS is simply a fileserver packaged as an appliance) making it easy to manage and easy to understand.  NAS tends to be very multi-purpose, replacing traditional file servers and often being used as the shared backing for virtualization.  NAS is typified by the NFS and CIFS protocols but we will not uncommonly see HTTP, FTP, SFTP, AFS and others available on NAS devices as well.  NAS works well as a connector allowing Windows and UNIX systems to share files easily with each other while only needing to work with their own native protocols.  NAS is commonly used as the shared storage for VMware’s vSphere, Citrix XenServer, Xen and KVM.  With NAS it is easy to use your shared storage in many different roles and easy to get good utilization from your shared storage system.

NAS does not always meet our needs.  Some special applications still need shared storage but cannot utilize NAS protocols.  The most notable products affected by this are Microsoft’s Hyper-V, databases and server clusters.  The answer for these products is SAN.  SAN, or Storage Area Networking, is a difficult concept and even at the best of times is difficult to categorize.  Like NAS, which is simply a different way of presenting traditional file servers, SAN is truly just a different way of presenting direct attached disks.  While the differences between SAN and DAS might seem obvious, actually differentiating between them is nebulous at best and impossible at worst.  SAN and DAS typically share protocols, chassis, limitations and media.  Many SAN devices can be attached and used as DAS.  And most DAS devices can be attached to a switch and used as SAN.  In reality we typically use the terms to refer to the usage scenario more than anything else.

SAN is difficult to utilize effectively for many reasons.  The first is that it is poorly understood.  SAN is actually simple – so simple that it is very difficult to grasp, making it surprisingly complex.  SAN is effectively just DAS that is abstracted, re-partitioned and presented back out to hosts as DAS again.  The term “shared storage” is confusing because while SAN technology, like NAS, can allow multiple hosts to attach to a single storage system, it does not provide any form of mediation for hosts attached to the same filesystem.  NAS is intelligent and handles this, making it easy to “share” shared storage.  SAN does not; it is too simple.  SAN is so simple that what in effect happens is simply that a single hard drive (abstracted as it may be) is wired into controllers on multiple hosts.  Back when shared storage meant attaching two servers to a single SCSI cable this was easy to envision.  Today, with SAN’s abstractions and the commonality of NAS, most IT shops will forget what SAN is doing and disaster can strike.
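The difference is easiest to see from the client’s point of view.  The sketch below is purely illustrative: the paths are hypothetical, and reading a raw LUN requires an actually attached device and the right privileges.  The point is simply that a NAS hands the host files while a SAN hands the host what looks like a local disk, with no built-in coordination between hosts.

```python
# Client's-eye view of NAS versus SAN (paths are hypothetical examples).

# NAS: the appliance exposes a filesystem over NFS or CIFS.  Sharing,
# locking and concurrent access are mediated by the NAS itself.
with open("/mnt/nas_share/reports/q3.txt", "w") as f:
    f.write("file semantics handled by the NAS\n")

# SAN: the array exposes a raw block device (an iSCSI or FC LUN) that looks
# exactly like a local disk.  The host must supply its own filesystem, and
# nothing stops a second host from mounting that same filesystem and
# corrupting it; coordination is entirely the hosts' problem.
with open("/dev/sdb", "rb") as lun:
    first_sector = lun.read(512)   # just raw blocks: no files, no locking
```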

SAN has its place, to be sure, but SAN is complex to use and to administer and very limiting.  Often it is very expensive as well.  The rule of thumb with SAN is this: unless you need SAN, use something else.  It is that simple.  SAN should be avoided until it is the only option and, when it is, it is the right option.  It is rarely, if ever, chosen for performance or cost reasons as it normally underperforms and costs more than other options.  But when you are backing Hyper-V or building a database cluster nothing else is going to be an option for you.  For most use cases in an SMB, using SAN effectively will require a NAS to be placed in front of it in order to share out the storage.

NAS makes up the vast majority of shared storage use scenarios.  It is simple, well understood and it is flexible.

Many, if not most, shared storage appliances today will handle both SAN and NAS and the difference between the two is in their use, protocols and ideology more than anything.  Often the physical devices are similar if not the same as are the connection technologies today.

More than anything it is important to have specific goals in mind when looking for shared storage.  Write these goals down and look at each technology and product to see how or if they meet these goals.  Do not use knee-jerk decision making or work off of marketing materials or what appears to be market momentum.  Start by determining if shared storage is even a need.  If so, determine if NAS meets your needs.  If not, look to SAN.  Storage is a huge investment; take the time to look at alternatives, do lots of research and, only after narrowing the field to a few specific, competitive products, turn to vendors for final details and pricing.

Apple’s Roadmap for iOS

Guessing at a company’s roadmap is always a dangerous venture.  In the case of Apple today and their iOS family of products, it feels less like predicting a roadmap and more like computing a trajectory.  Apple has some serious, game changing strategy already in motion and seeing where they intend to take it seems pretty reliable.  I know that many industry pundits have covered this ground as it has been a very popular topic as of late, but I wanted to add my own voice and viewpoint to the discussion.

Over the past several years Apple has been making a lot of seemingly disconnected and questionable decisions around their purchases, research and product releases.  Each piece, seen individually, makes little sense to the outside observer.  Taken together, however, we are piecing together a picture of what appears to be grand design and careful planning.

Apple’s fortunes have rapidly shifted from its traditional desktop market (Mac OSX) to its portable device market (iOS).  This began, innocuously, with the iPod and slowly turned into the iPhone, iPad and, most recently, the AppleTV.  The AppleTV is the really interesting player here as this device in its first iteration was based on OSX but in its second iteration became an iOS product.  Apple actually morphed a product from one line into the other.  Very telling.

The most interesting piece of the iOS puzzle, to me, is the App Store.  The App Store seems like little more than a neat way to funnel end user funds into Apple’s ample pockets and, on the surface, it certainly was a huge success in that area.  However, the App Store represents far more than a simple attempt at increasing profit margins.  No, the App Store has brought a paradigm shift to the way that end users acquire, install and manage applications.  This shift is nothing new to the technical world of Linux desktop users, who have long had the simple software acquisition systems that the App Store mimics, but the App Store brings the ease of use of Linux’s package management to the mainstream market and does so with a revenue model that does wonders for Apple at the same time.

The App Store makes the entire process of discovering and acquiring new software nearly painless for their customers, which encourages those customers to buy more apps, more often.  Traditionally computer owners buy software very infrequently.  Even with the ease of Internet downloads, the rate at which software is purchased is relatively low due to complexity caused by differences between download sites, concerns over compatibility, concerns over security and quality and the need to establish a transactional relationship with the software company to facilitate payment.  The App Store solves all of those issues and also makes finding new software much easier as there is a central repository which can be searched.  Because of this, Apple’s customers are purchasing software at an incredible pace.

Apple has many reasons to look more favorably upon its iOS product family than its more traditional products.  The old Mac lineup is, in reality, just another PC in a commodity market.  While OSX has some interesting features compared to Windows it is hardly a majorly differentiated product and with Linux rapidly cutting into the PC market in the netbook and alternative computing device space there is less and less room for OSX to play in.  The iOS devices, running on Apple’s own A4 processor, offer Apple the unique opportunity to engineer their products from the ground up as a completely controlled vertical stack – they control every significant piece of hardware and software giving them unprecedented control.  This control can be leveraged into awesome stability and integration as well as profit as few outside vendors are looking for their piece of the pie.

A fully integrated hardware and operating system stack also gives Apple’s development partners an opportunity to leverage their skills to the fullest – just as video game console developers know that underpowered consoles will often outperform desktop PCs simply because the developers have an opportunity to really tweak the code just for that one, stable device.  iOS offers this in a different environment.  Unlike developing for Android or Windows Phones, iOS offers a highly stable and well known ecosystem for developers to code against allowing them to leverage more of the platform with less effort.

The iOS devices, being based on a highly efficient operating system and being built on a very low power consumption platform designed for mobility, offer significant “green” advantages over many traditional devices.  This could be Apple’s new niche.  The power user market is all but lost and Apple quietly bowed out of their long-forgotten server market this past January.  This takes Apple to the other side of the spectrum entirely, but one where Apple seems to really understand what is needed and what their market wants.  Rather than being niche, Apple is poised to be a dominant player, and there is no denying that lower power consumption “green” devices will only continue to be important in the future.

In short order, Apple is going to be in a position to control an entire ecosystem ranging from mobile computing platforms, mobile telephony, fixed television-attached media devices and, with only minor effort, desktop computing.  Desktop computing may seem like an odd place for the iOS system to go, but if we really think about what Apple is developing here, it makes perfect sense.  The transition won’t be overnight, but it is sure to come.

The first step of the transition is hard to see but it involved the AppleTV.  The AppleTV 2.0 is an iOS device that is non-mobile, working its way into peoples’ homes.  Currently it is designed to function purely as a media center device, but all of the iOS functionality is there, dormant, waiting for the day when Apple decides to release an app interface and an AppleTV App Store loaded with apps controlled via wireless remote, Bluetooth keyboard or whatever input device Apple decides to provide for the AppleTV.  The only things keeping the AppleTV from becoming a full-fledged iOS-based desktop today are a lack of USB into which to attach a keyboard and mouse and Apple’s reluctance to provide a desktop environment and App Store for the AppleTV.  The foundation is there and ready to be activated.

In reality, we are early on in the iOS lifecycle and while the platform that Apple has chosen is very mature for mobile devices it is extremely underpowered for a desktop experience.  Each generation brings more computing power to the platform, however, and in very short order a desktop based on a later revision Apple processor and iOS may easily exceed the average user’s desktop expectations.  Most home users find their desktops today to be significantly overpowered for their basic needs of email, web browsing, watching Netflix and YouTube, etc.  These are tasks for which many people are switching to their iPads already.  In another generation or two of processors we may see an AppleTV-like device that draws only four or five watts of power able to adequately power the average user’s desktop computing needs.

The second step is the newly added App Store appearing in Mac OSX.  The addition of the App Store to the Mac platform means that the beginning of the transition is underway.  Incumbent Mac users are now being introduced to the concept of finding software, acquiring it and installing it all through a simple, integrated system just as iPhone and iPad users have been doing for years now.  Had the App Store and all of its costs and limitations been introduced to users and developers on the Mac first, it would likely have been shunned and faded away without real comment.  But today the Mac landscape is far different.

The plan, as I see it, with the Mac App Store is to begin centralizing critical apps for the Mac ecosystem into the App Store.  Over the next two to three years this process is likely to see all major apps move in this direction, leaving only smaller, less popular apps to be handled through the traditional purchase and install system.  Once a critical mass of apps has been reached and the iOS hardware platform has matured to a point where the speed is adequate for daily desktop computing tasks, Apple will flip the switch and change out the Mac OSX desktop for a new iOS desktop that is either a sister of the AppleTV or, potentially, the AppleTV device itself, encouraging Apple users to see the world of desktop computing and media delivery as one – not as unlikely as some might think given how commonly the two are already combined on iOS mobile devices today.

An iOS desktop could be very attractive to home users.  Many businesses might be willing to jump at the chance to move to well polished, low power consumption devices for their non-power user staff.  Those needing more power might look to use them as little more than thin clients as well.  There are many options around such a low cost device – low cost to purchase and low cost to operate.  As many companies are already forced to implement iOS management for their existing iPad and iPhone devices, adding in iOS desktop devices might be a trivial matter.  Apple has conquered many of the hurdles that it faced with Mac OSX for the iOS platform before they’ve even announced plans to make such a desktop device.

The laptop space, where Apple has a strong foothold today, is possibly the easiest platform to migrate.  The iPad is almost a full-fledged laptop today.  All Apple needs to do is add a hinge and a keyboard and they would have a device that works like an iPad but looks like the MacBook Air.  An easy transition likely to be heralded by Apple and its users alike.

Apple excels at subversive technology.  The iPod and iPhone, and to some extent now the iPad, snuck into the market as media players or phones but emerged as highly mobile computing devices used for all sorts of tasks and spurred on by the success of social media.  But they sneakily did one more thing – in only a few years’ time the iPod Touch went from being an MP3 player and email device to being one of the most popular mobile video game platforms, making Nintendo shake and basically removing Sony from the game altogether.  No one bought the iPod Touch with the intent of making it their new, primary video game device, but it happened, and the iPod is an excellent video game platform that is only just beginning to see its own potential.  The iPad is following close in its stead.  It is not necessarily that the iOS platforms are the best possible mobile video game devices but that they are purchased for other purposes and are “good enough” for most of the gaming population.  What the Wii wanted to be for consoles, the device that brought non-gamers into the gaming fold, the iPod truly did for mobile gaming.

The AppleTV is now perfectly poised to do the same thing that the iPod did for mobile gaming for the console market.  As more and more game makers focus on the iOS platform it will become increasingly apparent that the AppleTV, sitting already attached to many television monitors all over the world, is a video game console already purchased and ready to go.  What the Wii did in the last generation for the console the AppleTV is ready to do for the next.  Nintendo already proved that the largest segment of the video gaming market is primarily casual gamers who are not significantly concerned with having the latest, most powerful platform or the best games.

The AppleTV could provide an even less expensive gaming console with more features than the Wii that is far more attractive for developers who can utilize the same resources that they use to make games for all of Apple’s other iOS platforms.  Almost overnight, Apple has made the basis for a video gaming ecosystem that can rival nearly any in existence today.  And, of course, in time the AppleTV platform will get more and more powerful – slowly catching up to the more expensive video game consoles making it increasingly eligible as a serious platform contender for hard core console gamers.

Apple has a lot of pokers in the iOS fire but, if executed correctly, the potential is immense.

It will take a few years for Apple to completely phase out the long-standing Mac family; users will be resistant, if only for nostalgic reasons, and Apple has a few versions of Mac OSX up its sleeve yet, but I believe that the march towards a unified platform under the iOS banner is inevitable.  iOS represents the future, not only for Apple but for much of the industry: lower power consumption, ease of use and a minimum of different parts between many different devices.  I, for one, am very excited to see what Apple can do with such a tightly integrated ecosystem and believe that Apple has more opportunity to do great things with iOS than it ever did with the Mac platform.  This could truly be the dawning of great things for Apple and a paradigm shift for end users.