Apple’s Roadmap for iOS

Guessing at a company’s roadmap is always a dangerous venture.  In the case of Apple today and its iOS family of products, though, it feels less like predicting a roadmap and more like computing a trajectory.  Apple has a serious, game-changing strategy already in motion, and seeing where they intend to take it seems pretty reliable.  Many industry pundits have covered this ground of late, but I wanted to add my own voice and viewpoint to the discussion.

Over the past several years Apple has made a number of seemingly disconnected and questionable decisions in its purchases, research and product releases.  Each piece, seen individually, makes little sense to the outside observer.  Taken together, however, they form a picture of grand design and careful planning.

Apple’s fortunes have rapidly shifted from its traditional desktop market (Mac OSX) to its portable device market (iOS).  This began, innocuously, with the iPod and slowly grew into the iPhone, the iPad and, most recently, the AppleTV.  The AppleTV is the really interesting player here: in its first iteration it was based on OSX, but in its second it became an iOS product.  Apple actually morphed a product from one line into the other.  Very telling.

The most interesting piece of the iOS puzzle, to me, is the App Store.  On the surface the App Store seems like little more than a neat way to funnel end user funds into Apple’s ample pockets, and it certainly has been a huge success in that area.  But the App Store represents far more than a simple attempt at increasing profit margins: it has brought a paradigm shift to the way that end users acquire, install and manage applications.  This shift is nothing new to the technical world of Linux desktop users, who have long had the simple software acquisition systems that the App Store mimics, but the App Store brings the ease of use of Linux’s package management to the mainstream market – and does so with a revenue model that does wonders for Apple at the same time.

The App Store makes the entire process of discovering and acquiring new software nearly painless, which encourages customers to buy more apps, more often.  Traditionally computer owners buy software very infrequently.  Even with the ease of Internet downloads, the rate at which software is purchased is relatively low: download sites differ, compatibility is uncertain, security and quality are in doubt, and every purchase requires establishing a transactional relationship with the software company to facilitate payment.  The App Store solves all of those issues and, with a single searchable central repository, makes finding new software much easier too.  As a result, Apple’s customers are purchasing software at an incredible pace.

Apple has many reasons to look more favorably upon its iOS product family than its more traditional products.  The old Mac lineup is, in reality, just another PC in a commodity market.  While OSX has some interesting features compared to Windows, it is hardly a dramatically differentiated product, and with Linux rapidly cutting into the PC market in the netbook and alternative computing device space there is less and less room for OSX to play in.  The iOS devices, running on Apple’s own A4 processor, offer Apple the unique opportunity to engineer products from the ground up as a completely controlled vertical stack – Apple controls every significant piece of hardware and software.  That control can be leveraged into impressive stability and integration, and into profit, as few outside vendors are looking for their piece of the pie.

A fully integrated hardware and operating system stack also gives Apple’s development partners an opportunity to leverage their skills to the fullest – just as video game console developers know that underpowered consoles will often outperform desktop PCs simply because developers have the opportunity to tune their code for that one, stable device.  iOS offers this in a different environment.  Unlike Android or Windows Phone, iOS offers a highly stable and well known ecosystem to code against, allowing developers to leverage more of the platform with less effort.

The iOS devices, being based on a highly efficient operating system and built on a very low power consumption platform designed for mobility, offer significant “green” advantages over many traditional devices.  This could be Apple’s new niche.  The power user market is all but lost, and Apple quietly bowed out of its long-forgotten server market this past January.  This takes Apple to the other side of the spectrum entirely, but one where Apple seems to really understand what is needed and what its market wants.  Rather than being niche, Apple is poised to be a dominant player, and there is no denying that lower power consumption “green” devices will only continue to be important in the future.

In short order, Apple will be in a position to control an entire ecosystem spanning mobile computing platforms, mobile telephony, fixed television-attached media devices and, with only minor effort, desktop computing.  Desktop computing may seem like an odd place for iOS to go, but if we really think about what Apple is developing here, it makes perfect sense.  The transition won’t happen overnight, but it is sure to come.

The first step of the transition is hard to see, but it involves the AppleTV.  The AppleTV 2.0 is a non-mobile iOS device working its way into people’s homes.  Currently it functions purely as a media center device, but all of the iOS functionality is there, dormant, waiting for the day when Apple decides to release an app interface and an AppleTV App Store loaded with apps controlled via wireless remote, Bluetooth keyboard or whatever input device Apple decides to provide.  The only things keeping the AppleTV from becoming a full-fledged iOS-based desktop today are the lack of USB ports for a keyboard and mouse and Apple’s reluctance to provide a desktop environment and App Store for it.  The foundation is there and ready to be activated.

In reality, we are early in the iOS lifecycle, and while the platform Apple has chosen is very mature for mobile devices it is extremely underpowered for a desktop experience.  Each generation brings more computing power, however, and in very short order a desktop based on a later revision of Apple’s processors and iOS may easily exceed the average user’s expectations.  Most home users find today’s desktops significantly overpowered for their basic needs of email, web browsing, and watching Netflix and YouTube – tasks for which many people are switching to their iPads already.  In another generation or two of processors we may see an AppleTV-like device that draws only four or five watts yet adequately powers the average user’s desktop computing needs.

The second step is the newly added App Store in Mac OSX.  Its addition to the Mac platform means the transition is underway: incumbent Mac users are being introduced to finding, acquiring and installing software through a simple, integrated system, just as iPhone and iPad users have been doing for years.  Had the App Store, with all of its costs and limitations, been introduced to Mac users and developers first, it would likely have been shunned and faded away without real comment.  But today the Mac landscape is far different.

The plan, as I see it, with the Mac App Store is to begin centralizing critical apps for the Mac ecosystem into the App Store.  Over the next two to three years this process is likely to pull in all major apps, leaving only smaller, less popular apps to be handled through the traditional purchase and install system.  Once a critical mass of apps has been reached and the iOS hardware platform has matured to the point where its speed is adequate for daily desktop computing tasks, Apple will flip the switch and swap the Mac OSX desktop for a new iOS desktop – either a sister device to the AppleTV or, potentially, the AppleTV itself, encouraging Apple users to see desktop computing and media delivery as one.  That is not as unlikely as some might think, given how commonly the two are combined on iOS mobile devices today.

An iOS desktop could be very attractive to home users.  Many businesses might jump at the chance to move their non-power users to well polished, low power consumption devices, and those needing more power might use them as little more than thin clients.  There are many options around such a low cost device – low cost to purchase and low cost to operate.  Since many companies are already forced to implement iOS management for their existing iPad and iPhone fleets, adding iOS desktop devices might be a trivial matter.  Apple has already cleared, for iOS, many of the hurdles it faced with Mac OSX – before even announcing plans for such a desktop device.

The laptop space, where Apple has a strong foothold today, is possibly the easiest platform to migrate.  The iPad is almost a full-fledged laptop already.  All Apple needs to do is add a hinge and a keyboard, and it would have a device that works like an iPad but looks like the MacBook Air – an easy transition likely to be heralded by Apple and its users alike.

Apple excels at subversive technology.  The iPod and iPhone, and to some extent now the iPad, snuck into the market as media players or phones but emerged as highly mobile computing devices used for all sorts of tasks, spurred on by the success of social media.  But they sneakily did one more thing: in only a few years’ time the iPod Touch went from being an MP3 player and email device to being one of the most popular mobile video game platforms, making Nintendo shake and all but removing Sony from the game altogether.  No one bought the iPod Touch intending to make it their new, primary video game device, but it happened, and the iPod is an excellent video game platform that is only just beginning to see its own potential.  The iPad is following close behind.  It is not that the iOS platforms are the best possible mobile video game devices but that they are purchased for other purposes and are “good enough” for most of the gaming population.  What the Wii wanted to be for consoles – the device that brought non-gamers into the gaming fold – the iPod truly was for mobile gaming.

The AppleTV is now perfectly poised to do for the console market what the iPod did for mobile gaming.  As more and more game makers focus on the iOS platform, it will become increasingly apparent that the AppleTV, already attached to television monitors all over the world, is a video game console already purchased and ready to go.  What the Wii did for the console in the last generation, the AppleTV is ready to do for the next.  Nintendo already proved that the largest segment of the video gaming market is primarily casual gamers who are not significantly concerned with having the latest, most powerful platform or the best games.

The AppleTV could provide an even less expensive gaming console than the Wii, with more features, and one far more attractive to developers, who can reuse the same resources they use to make games for Apple’s other iOS platforms.  Almost overnight, Apple has laid the groundwork for a video gaming ecosystem that can rival nearly any in existence today.  And, of course, in time the AppleTV platform will grow more and more powerful, slowly catching up to the more expensive video game consoles and becoming a serious contender even for hard core console gamers.

Apple has a lot of irons in the iOS fire but, if it executes correctly, the potential is immense.

It will take a few years for Apple to completely phase out the long-standing Mac family.  Users will be resistant, if only for nostalgic reasons, and Apple has a few versions of Mac OSX up its sleeve yet, but I believe the march towards a unified platform under the iOS banner is inevitable.  iOS represents the future, not only for Apple but for much of the industry: lower power consumption, ease of use and a minimum of different parts across many different devices.  I, for one, am very excited to see what Apple can do with such a tightly integrated ecosystem and believe that Apple has more opportunity to do great things with iOS than it ever did with the Mac platform.  This could truly be the dawning of great things for Apple and a paradigm shift for end users.

Do You Really Need Redundancy: The Real Cost of Downtime

Downtime – now that is a word no one wants to hear.  It strikes fear into the hearts of businesses, executives and especially IT staff.  Downtime costs money and it causes frustration.

Because downtime triggers an emotional reaction, businesses often react to it differently than they would to traditional business factors.  This emotional approach causes businesses – especially smaller businesses, which often lack rational financial controls – to treat downtime as being far worse than it is.  It is not uncommon to find that a small business has done more financial damage to itself reacting to the fear of potential downtime than the feared downtime would have inflicted had it actually occurred.  This is a dangerous overreaction.

The first step is to determine the cost of downtime.  In IT we often deal with rather complex systems, and downtime comes in a variety of flavors: loss of access, loss of performance or complete loss of a system or systems.  Determining every type of downtime and its associated costs can be rather complex, but a high level view is often enough for producing rational budgets and is, at the very least, a good starting point towards understanding the business risks involved.  Keep in mind that, just as spending too much to avoid downtime is bad, so is spending too much to calculate its cost.  Don’t spend so much time and so many resources determining whether you will lose money that you would have been better off just losing it.  Beware the high cost of decision making.

We can start by considering only complete system loss.  What is the cost of organizational downtime for you – that is, if you had to cease all business for an hour or a day, how much money would be lost?  In some cases the losses could be dramatic, as with a hospital, where a day of downtime would mean a loss of faith and future customer base and could potentially result in lawsuits.  But in many cases a day of downtime might have nominal financial impact – many businesses could simply call the day a holiday, let their staff rest and have people work a little harder over the next few days to clear the backlog.  It all comes down to how your business does and can operate and how well suited you are to mitigating lost time.  Many businesses will look only at daily revenue figures to determine lost revenue, but this can be wildly misleading.

Once we have a rough figure for downtime cost we can consider downtime risk.  This is very difficult to assess, as good figures on IT system reliability are nearly non-existent and every organization’s systems are so unique that industry data is very nearly useless.  Here we are forced to rely on IT staff to provide an overview of risks and, hopefully, a reliable assessment of the likelihood of each.  For example, in big round numbers, if a line of business application runs on a server with only one hard drive, then we would expect that sometime in the next five to ten years there will be downtime associated with the loss of that drive.  If that same server has hot swap drives in a mirrored array, the likelihood of downtime from that storage system, even over ten years, is quite small.  This doesn’t mean that a drive is unlikely to fail – it is likely – but that the system will probably be unaffected, with redundancy restored before end users notice that anything has happened.
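To make the single-drive versus mirrored-array comparison concrete, here is a rough back-of-the-envelope sketch.  The 5% annualized drive failure rate and one-day rebuild window are illustrative assumptions for the sketch only, not vendor statistics:

```python
# Illustrative outage-risk comparison: one drive vs. a mirrored pair.
# The AFR and rebuild window below are assumed figures, not measured data.
AFR = 0.05                      # assumed annualized failure rate of one drive
REBUILD_WINDOW_YEARS = 1 / 365  # assumed one-day window to restore redundancy

# Single drive: any drive failure is an outage.
p_single = AFR

# Mirrored pair: an outage requires one drive to fail AND its partner to
# fail within the rebuild window (times two, since either can fail first).
p_mirror = 2 * AFR * (AFR * REBUILD_WINDOW_YEARS)

print(f"single drive: {p_single:.3%}/yr  mirrored pair: {p_mirror:.5%}/yr")
```

Even with crude inputs, the point survives: mirroring does not make drive failures rarer, it makes the *outage* orders of magnitude less likely.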

Our last rough estimation tool is to apply applicable business hours.  Most businesses do not run 24×7; some do, of course, but most do not.  Is the loss of a line of business application at six in the evening equivalent to losing it at ten in the morning?  What about on the weekend?  Are people productively using it at three on a Friday afternoon, or would losing it barely cost a thing and make for happy employees getting an extra hour or two on their weekends?  Can schedules be shifted if a loss hits near lunch time?  These factors, while seemingly trivial, can be significant.  If downtime is limited to two to four hours, many businesses can mitigate nearly all of the financial impact simply by asking employees for a little schedule flexibility – taking lunch early, or leaving work early one day and working an extra hour the next.

Now that we have these factors – the cost of downtime, the ability to mitigate its impact based on duration, and the risks of outage events – we can begin to draw a picture of what a downtime event is likely to look like.  From this we can derive how much money it is worth spending to reduce the risk of such an event.  For some businesses this number will be extremely high; for others it will be surprisingly low.  The exercise can expose a great deal about how a business operates that is not normally all that visible.
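Combining the three factors can be as simple as multiplying them out.  A minimal sketch, where every input figure is a hypothetical placeholder you would substitute with your own estimates:

```python
# Back-of-the-envelope expected annual downtime cost.
# All inputs are estimates the business must supply; nothing here is industry data.

def expected_annual_downtime_cost(
    annual_failure_probability: float,  # chance of the outage event in a given year
    outage_hours: float,                # expected length of one outage
    cost_per_hour: float,               # revenue/productivity lost per hour of outage
    mitigation_factor: float = 1.0,     # fraction of the loss remaining after schedule
                                        # shifting, off-hours timing, etc. (1.0 = none)
) -> float:
    return annual_failure_probability * outage_hours * cost_per_hour * mitigation_factor

# Hypothetical single-drive server: a 15% yearly chance of an 8-hour outage
# costing $500/hour, with half the impact absorbed by flexible scheduling.
print(expected_annual_downtime_cost(0.15, 8, 500, 0.5))  # 300.0
```

An expected exposure of a few hundred dollars a year frames the redundancy conversation very differently than the raw "an outage costs us $4,000" figure does.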

It is important to note at this point that what we are looking at here is a loss of availability of systems, not a loss of data.  We are assuming that good backups are being taken and that those backups are not compromised.  Redundancy and downtime are topics related to availability loss, not data loss.  Data loss scenarios should be treated with equal or greater diligence but are a separate topic.  It is a rare business that can survive catastrophic data loss, but it is common to experience, and easily survive, even substantial downtime.

There are multiple ways to stave off downtime.  Redundancy is highly visible, treated almost like a buzzword, and so receives most of the focus, but there are other means as well.  Good system design is important: avoiding unnecessary complexity can heavily reduce downtime simply by removing points of needless risk and fragility.  Using quality hardware and software matters too – redundant low-end hardware will often fail as frequently as non-redundant enterprise-class hardware.  A rapid supply chain for replacement parts can be a significant factor, often seen in the form of four hour hardware vendor replacement part response contracts.  The list goes on.  We will focus on redundancy, which is where businesses are most likely to overspend when faced with the fear of downtime.

Now that we know the cost of failing to have adequate redundancy, we can compare this potential cost against the very real, up front cost of providing that redundancy.  Some components, such as hard drives, are highly likely to fail and are relatively easy and cost effective to make redundant – turning a significant risk into a trivial one.  These tend to be the first point of focus.  But there are many other areas of redundancy to consider: power supplies, network hardware, Internet connections and even entire systems – the last often made redundant through modern virtualization techniques, providing avenues for redundancy previously inaccessible to many smaller businesses.

New types of redundancy, especially those made available through virtualization, are often where businesses are tempted to overspend, perhaps dramatically, compared to the risk of downtime.  Worse yet, in the drive to acquire the latest redundancy fads, companies often implement these techniques incorrectly and actually introduce greater risk and a higher likelihood of downtime than if they had done nothing at all.  It is becoming increasingly common to hear of businesses spending tens or even hundreds of thousands of dollars to mitigate a potential downtime loss of only a few thousand dollars – and then failing in that attempt and increasing their risk anyway.

When gauging the cost of mitigation it is critical to remember that mitigation is a guaranteed expense, while the risk is only a risk.  It is much like auto insurance, where you pay a guaranteed small monthly fee to fend off a massive, unplanned expense.  The theory of risk mitigation is to spend a comparatively small amount of money now to reduce the chance of a large expense later – but if the cost of mitigation gets too high, it becomes better to simply accept the risk.
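The insurance comparison reduces to a one-line test: the guaranteed premium must cost less than the expected-loss reduction it buys.  A minimal sketch, with hypothetical dollar figures chosen only to illustrate the two outcomes:

```python
# Is a guaranteed mitigation spend justified by the risk it removes?
# All dollar figures in the examples are hypothetical.

def mitigation_is_worthwhile(
    annual_mitigation_cost: float,  # the guaranteed spend (the "premium")
    expected_loss_without: float,   # expected annual loss with no mitigation
    expected_loss_with: float,      # residual expected annual loss after mitigation
) -> bool:
    # The premium must cost less than the expected-loss reduction it buys.
    return annual_mitigation_cost < (expected_loss_without - expected_loss_with)

# A $20,000/yr clustering project that removes a $3,000/yr expected loss
# is a bad trade; a $400 mirrored-drive upgrade that shrinks the same
# expected loss to $100 is an easy yes.
print(mitigation_is_worthwhile(20_000, 3_000, 0))  # False
print(mitigation_is_worthwhile(400, 3_000, 100))   # True
```

The arithmetic is trivial; the discipline of actually writing down both sides of the comparison is what most overspending businesses skip.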

Systems can be assessed individually, of course.  Keeping a web presence and telephone system up and running at all times is far more important than keeping up an email system where even hours of downtime are unlikely to be noticed by external clients.  Paying only to protect those systems where the cost of downtime is significant is an important strategy.

Do not be surprised if you discover that, beyond some very basic redundancy (such as mirrored hard drives), a simple network design with good backup and restore plans and a good hardware support contract is all that is needed for the majority, if not all, of your systems.  By lowering the complexity of your systems you make them naturally more stable and easier to manage – further reducing the cost of your IT infrastructure.

Patching in a Small Environment

In enterprise IT shops, system patching is a complicated process involving large numbers of test systems which mirror production systems, so that each new patch arriving from operating system and software vendors can be tested in a real world environment to see how it interacts with the hardware and software combinations in the organization.  In an ideal world, every shop would have a managed patching process that responded to newly published patches immediately, testing each one at once and applying it as soon as it was deemed safe and applicable.  But the world is not an ideal one, and in real life we have to make do with limited resources: physical, temporal and financial.

Patches are generally released for a few key reasons: security, stability, performance and, occasionally, new features.  Except for new features, which are normally handled through a different release process, a patch represents a fix to a known issue.  This is not an “if it is not broken, don’t fix it” scenario but an “it is broken and has not completely failed yet” scenario, which demands attention – the sooner the better.  Taking a “sit back and wait” approach to patches is unwise, because the existence of a new patch means that malicious hackers have a “fix” to analyze; even if an exploit did not exist previously, it will very shortly.  The release of the patch itself can trigger the immediate need for that patch.

This ecosystem demands a “patch quickly” mentality.  Patches should never sit; they need to be applied as soon as they are released and tested.  Waiting to patch can mean running with critical security bugs or keeping systems unnecessarily unreliable.

Small IT shops rarely, if ever, have test environments, whether for servers, networking equipment or even desktops.  This is not ideal but, realistically, even if those environments were available, few small shops have the spare IT staff to run the tests in a timely manner.

This is not as bleak as it sounds.  The testing done for most patches is redundant with testing already performed by the vendor.  Vendors cannot possibly test every hardware and software interaction that could ever occur with their products, but they generally test wide ranges of permutations and look at the areas where interactions are most likely.  It is rare for a major vendor to cripple its own software with a bad patch.  Yes, it does happen, and having good backups and rollback plans is important, but in day to day operations patching is a relatively safe process that is far more important to do promptly than to delay for testing opportunities that may or may not occur.

Like any system change, patches are best applied in frequent, small doses.  If patches are applied promptly then normally only one or a few must be applied at a time.  For operating systems you may still have to deal with multiple patches at once, especially if patching only weekly, but seldom must you patch dozens or hundreds of components at one time when working this way.  Done like this, it is vastly easier to evaluate patches for adverse effects and to roll back if a patch process goes badly.

The worst scenario for a small business lacking a proper patch testing workflow is to wait on patches.  Waiting means that systems go without needed care for long periods of time, and when patches are finally applied it is often in large, bulk patch runs.  Applying many patches at once increases the chance that something will go wrong and, when it does, identifying which patch (or patches) is at fault and producing a path to remediation becomes much more difficult.
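The fault-isolation cost of bulk patching can be put in rough numbers: finding one bad patch in a batch by bisection (roll back half, retest, repeat) takes on the order of log2 of the batch size in rollback-and-retest cycles.  A small sketch of that arithmetic:

```python
import math

def isolation_retests(batch_size: int) -> int:
    """Worst-case rollback-and-retest cycles to find one bad patch in a
    batch, assuming a binary-search (bisection) isolation strategy."""
    return math.ceil(math.log2(batch_size)) if batch_size > 1 else 0

# Patching promptly, one or two patches at a time, makes the culprit
# obvious almost immediately; a quarterly bulk run of 64 patches can
# require up to six full rounds of rollback and retest.
print(isolation_retests(2), isolation_retests(64))  # 1 6
```

Each of those cycles is itself a change window with its own risk, which is exactly why small, frequent patch doses beat infrequent bulk runs.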

Delayed patching provides little or no advantage to either IT or the business but carries substantial risk to security, stability and performance.  The best practice for patching in a small environment is either to allow systems to self patch as quickly as possible or to schedule a regular patching window, perhaps weekly, at a time when the business is best prepared to handle a failed patch and its remediation.  Whether you patch automatically or simply do so regularly through a manual process, patch often and promptly for best results.