A few years back I chaired a cloud summit in the Dell Services organization for our portfolio of SaaS products, most of which came to us through four acquisitions. The goal of the summit was to develop strategies for cloud-enabling the different product lines, most of which had been architected before the cloud. The high point of the summit for me was an open discussion on how to leverage the cloud within our products. The discussion seesawed back and forth as people raised the issues of moving their existing workloads to the cloud - e.g. the need for consistently low network latency, physical proximity between certain clusters, high-speed and predictable IOPS, and low storage costs. Among the 20+ architects and development leads were a few cloud evangelists, who argued the counterpoints. But by the end of the discussion, the pervasive opinion of the attendees was clear: cloud-enabling these product lines would be both extremely high risk and high cost, and almost every product would likely require a ground-up rewrite.

It was then that I realized how pervasive the tendency is for software systems to inadvertently couple themselves to the unstated functional constraints of their underlying infrastructure. For example, a system developed for physical hardware with predictably low latency between two clusters will almost certainly not function well in an environment with periodic network instability. A system developed to access storage with consistent IOPS will likely behave very badly when IOPS is reduced or becomes intermittently volatile. While the same can be said of a cloud architecture, it has one advantage over physical infrastructure: systems designed for an adverse computing environment such as the cloud are typically forced to architect for higher resiliency, thereby reducing the system's inherent unstated functional constraints.
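To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical names) of the kind of defensive pattern an adverse environment forces on a design: rather than assuming every network call succeeds with low latency, the caller retries transient failures with exponential backoff and jitter. A system built this way carries far fewer unstated assumptions about its infrastructure.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1):
    """Invoke an unreliable operation, retrying transient failures
    with exponential backoff plus jitter instead of assuming the
    network is stable."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Back off exponentially; jitter spreads out retries so
            # many clients don't hammer a recovering service at once.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)

# Hypothetical stand-in for a flaky network call: it fails twice
# with a transient error, then succeeds.
failures = {"remaining": 2}

def flaky_fetch():
    if failures["remaining"] > 0:
        failures["remaining"] -= 1
        raise ConnectionError("transient network error")
    return "payload"

result = call_with_retries(flaky_fetch, base_delay=0.01)
```

The point is not this particular pattern but the habit it represents: code written against an environment that may misbehave states its assumptions explicitly, while code written against "always fast, always up" hardware hides them.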

As the cloud moves toward mainstream adoption, so too are the case studies of companies that forsake the cloud in order to achieve lower costs, better stability, and/or more control. While I would not claim the cloud in 2013 is the right solution for all business applications and cost constraints, I do have a word of caution for those who are considering leaving the cloud: be careful not to take a one-way off ramp that will preclude you from harnessing the cloud in the future.

Whether you believe that cloud computing is a disruptive or an incremental innovation, the implications of this inherent tendency can be acute. Those who think the cloud is a disruptive innovation in computing - like time sharing in the 1960s, the personal computer in the 1980s, and client-server in the 1990s - believe that over time all workloads will become more effective and cost efficient in the cloud. If they are right, an off ramp from the cloud to physical infrastructure will leave you with a legacy architecture that eventually requires a ground-up rewrite to re-enable it for the cloud.

Those who think cloud computing is an incremental innovation - like minicomputers in the 1970s or laptops in the 1990s - believe the cloud is useful for only some workloads. If their thinking prevails, the off ramp still has the potential to cut you off over time from the incremental innovations of the cloud (e.g. burst capacity, auction-based pricing, infrastructure agility, on-demand pricing).

So if you plan to move from the cloud to physical infrastructure, plan carefully: take the time to tightly define the functional constraints of your system, and ensure your software does not permanently couple itself to its new physical infrastructure.