The term “cloud” in IT today evokes many responses, feelings and ideas on its purpose, makeup, and overall value to an organization. Some believe “cloud” is exclusive to IT services fully residing in an externally owned and run data center, in which an organization rents resources through an on-demand model. Some people believe cloud to be the creation of IT as a service within the organization, with the hope of creating true utility computing. And some people think cloud is nothing more than hyperbole, clever marketing and vendors trying to hawk more of their gear to unsuspecting punters.
I’m here to tell you that in my humble opinion “cloud” is none of this, and all of the above at the same time. Sound contradictory? It is, and isn’t… Confusing? It doesn’t have to be.
To lend credence to my approach to explaining cloud, a bit of a history lesson may be needed (so bear with me). In the heady days when the mainframe ruled the corporate IT landscape, the idea was to provide a centralized computing model that could allocate resources to services as needed. Mainframes provided a stable, highly available and scalable platform that IT could count on to run and support an entire business. While proprietary, expensive and about as easy to manage as a room of 30 toddlers, the mainframe provided stability, some flexibility and the resources needed for even the most demanding services that companies required to be competitive.
The industry chugged along, mainframes were the way to go and Unix was king… Then it all changed with the creation of the personal computer and the seemingly unstoppable rise of Microsoft as the choice for many business productivity applications. The reason for this shift away from the mainframe was that the MS stack relied on a distributed, client-server model, where each operating system and application had its own physical server to reside on. This gave many small software companies the ability to develop their offerings in a more cost-effective way, and to provide innovative tools that helped companies be more competitive in their respective industries.
As the distributed computing model grew and boomed, it became increasingly evident that while it was an easy model to adopt, it was inherently inefficient. Enterprises had hundreds, if not thousands, of servers running a multitude of applications, but the majority of these servers ran at utilization levels below 10%. This meant that expensive data center real estate and the overhead associated with power and cooling were being wasted, not to mention all the memory and CPU cycles sitting idle that had been paid for up front.
Everyone knows where this part of the story is going… VMware… A company that developed a way to abstract the physical resources on a single server so that multiple operating systems and applications could reside on one physical box, with total software isolation and the ability to share the physical resources amongst them. This allowed the consolidation of thousands of servers down to hundreds, and hundreds down to several dozen.
VMware managed to reduce data center overhead costs, while providing better resource utilization than companies had seen in more than a decade (some more than ever recorded). Over the next several years VMware perfected its suite, and added a slew of high availability and resource efficiency gains to round out its product offering and move further into the enterprise, with the hopes of convincing clients to virtualize their mission critical applications (something few did in the initial phase).
At this more advanced stage of virtualization, clients are managing hundreds, if not thousands, of virtual machines. And while their staff no longer have to run across the data center floor all day to service physical machines that sit hundreds of meters apart, many clients are saying the complexity is greater than when they had only physical assets to manage. Virtual server sprawl, along with complex storage management and the cumbersome provisioning process for new services, has been hindering clients from really getting the ROI from server virtualization they had originally planned on in their TCO calculations and the subsequent business cases they made to upper management.
The need to bring these tasks together under one umbrella, in the hopes of having IT teams work together and collaborate, as well as streamlining the process and reducing its time, has led many companies to look at doing what was done in the mainframe era: fostering an environment that provides “IT as a service”… Or, more simply put, a dynamic set of resources, managed centrally, that can be allocated on demand to provide services to end users as they need them.
If this all sounds like a bit of a circular conversation, to a certain extent it is… “The Cloud” in its truest form is nothing more than a way to make your existing virtualized environment work together efficiently and collaboratively, so that companies can create an elastic pool of resources that can be provisioned, re-provisioned and constantly optimized to deliver on-demand IT services to their end users.
So to come back to my earlier assertion that the cloud can be ANY and ALL of the three areas mentioned: it isn’t as crazy as one may have thought when this somewhat long-winded story began…
As long as your environment, its devices and its management tools provide a way to collaborate efficiently, so that new services can be created with as much ease and speed as possible, it no longer really matters what the color of your cloud is.