By Roger Mitan
The cloud started with companies like Amazon, Rackspace, and others offering developers a place to quickly spin dev environments up and down to test changes and plan deployments of new updates or software products. These public clouds rapidly evolved into platforms running entire infrastructures, replacing on-premises equipment and capital expenditures with shared public cloud resources and operational expenditures.
The cloud then further evolved to include software-only offerings, such as the ability to simply request a database application or directory service without worrying about the physical or virtual infrastructure. Clients are simply given access to the application, and the cloud provider handles everything underneath.
Through all of these versions of the cloud, some companies began to realize it was becoming increasingly difficult to understand and control the costs of all these services. What used to be a simple fixed cost for a certain amount of memory, CPU, and storage grew to include IOPS charges, bandwidth charges, database and application transaction charges, and more. This caused some companies to re-evaluate which workloads were best run in the cloud and which would be better back on their own infrastructure.
These and other factors, such as applications with high performance demands, drove an evolution toward private and hybrid clouds. Private clouds typically place cloud interfaces or orchestrators in front of hypervisors such as VMware or Hyper-V, but with dedicated physical resources, and therefore offer more predictable, controllable costs and performance. However, the private cloud sacrifices some flexibility and scalability, which leads us to the hybrid cloud.
Hybrid cloud combines private infrastructure with public cloud services. Typically, the more performance-demanding applications, such as databases, along with infrastructure components that don't need to scale, are placed in a private cloud or on local infrastructure. The remaining pieces are then implemented using public cloud resources. These hybrid cloud configurations can become very complicated to set up and manage, and this is where new advances from the industry come into play.
Many cloud providers have introduced new ways to integrate their public cloud infrastructures with your private cloud and local infrastructure. For instance, Microsoft has introduced Azure Stack, which allows you to provide Azure-like services on your own or colocated infrastructure while tying seamlessly into the Azure cloud. VMware and AWS have introduced VMware Cloud on AWS, which allows you to run VMware workloads on Amazon Web Services and easily shift between the two, all from one graphical interface.
Regional cloud providers, such as BlueBridge Networks, also offer tools for easily deploying hybrid environments. They can provide colocation for your physical gear while providing connections into their shared-resource cloud environment. From there, they can facilitate seamless connections into the larger public cloud providers such as Amazon and Azure. This type of deployment can be one of the most flexible options: you get the combination of private, public, and hybrid, all managed by a local team of experienced cloud experts, so you can concentrate on doing business instead of managing your IT infrastructure.
The evolution of the cloud from simple to complex has taken place over a very short time. Don't let those complexities keep you from taking advantage of all it has to offer. New and emerging technologies, along with trusted cloud service provider partners, can make your path to the cloud clear.
Roger Mitan is the director of engineering with BlueBridge Networks, a downtown Cleveland-headquartered data-center and cloud computing business. He can be reached at (216) 621-2583 and email@example.com.