Optimizing workload and data placement
As companies embark on their application and data modernization programs and look to the cloud and infrastructure required to support their plans, most land on a hybrid cloud strategy with application and data workloads balanced across both public and private clouds. Hybrid deployments combine the hyper-scalers’ public cloud benefits of innovation, speed, consumption and scale with private cloud benefits of regulatory compliance, performance, data gravity and recouping of existing investments. Hybrid also enables increasingly dynamic workload placement over time, allowing companies to optimize for performance, service levels, security and compliance, and cost. In developing a hybrid strategy, there are three key factors to consider.
Establishing a primary hyper-scaler is usually the best bet
The hyper-scalers offer significant benefits of agility and scale. But the real value of public cloud lies in the innovation and power of their PaaS solutions, including serverless computing (AWS Lambda or Azure Functions) and new AI capabilities (GCP’s TensorFlow). While digitally decoupled microservices allow companies to mix and match best-in-class PaaS solutions from multiple providers, such an approach has its limitations. These include having to split and dilute developer skills across platforms, incurring additional operational complexity and cost that raise total cost of ownership, and limited sharing of large datasets over expensive network connections due to data gravity. Further, by committing 51% or more of their cloud spend to a single hyper-scaler, companies can unlock better incentives and discounts from that provider. For these reasons, it’s usually preferable to select a primary hyper-scale provider to maximize innovation, improve investment in skills, minimize operational complexity, and optimize TCO across the whole of the cloud estate.
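To make the serverless model concrete: the appeal of PaaS offerings like AWS Lambda is that a company deploys only a function, and the hyper-scaler handles provisioning, scaling and billing per invocation. A minimal Lambda-style handler might look like the sketch below (the event shape and function body are illustrative assumptions, not a specific production workload):

```python
import json


def handler(event, context):
    """Minimal Lambda-style handler: returns a greeting for the caller.

    The hyper-scaler's runtime invokes this function on demand, so the
    company pays per invocation instead of running an always-on server.
    The 'name' field in the event is a hypothetical example input.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same function shape, rewritten against a different provider's runtime contract, illustrates the skills split described above: each hyper-scaler has its own event formats, tooling and deployment model, which is one reason concentrating on a primary provider pays off.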
Hybrid offers the best of both worlds
Application and data workload placement are determined by a number of business and technical factors. In the public cloud, companies can take advantage of rapid innovation cycles, spin up new environments faster, rapidly scale out deployments and leverage consumption-based OpEx models. Many companies, however, will also need to evolve their data centers into private clouds that replicate many of the attributes and benefits of the hyper-scalers to accommodate other business requirements. For some, this may be driven by regulatory requirements (such as GxP for pharma, HIPAA for healthcare, as well as GDPR). For others, it may be to support business-critical and highly transactional applications/datasets with significant scale processing requirements – which may be difficult to optimize in public, shared environments. It may often be necessary to co-locate other applications that need to integrate with private cloud applications and large datasets due to latency, bandwidth and cost considerations. Lastly, many companies will still need to recoup existing investments in their data centers and equipment. In addition to these drivers, Hybrid Cloud can also be used to enhance disaster recovery by backing up Private Clouds and data centers in the Public Cloud, and to arbitrage application and data workload placement against the public environment. As a result of these varied needs, many companies look to Hybrid Cloud to optimize application and data workload placement across Public and Private Clouds, maximizing innovation and performance while containing costs—offering the “best of both worlds.”
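The placement factors above—regulatory constraints, latency coupling to private datasets, elasticity needs and PaaS affinity—can be sketched as a simple decision heuristic. Everything in this sketch (the workload attributes, the rule ordering, the function names) is an illustrative assumption, not a prescribed placement model:

```python
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    regulated: bool        # subject to GxP/HIPAA/GDPR-style requirements
    latency_coupled: bool  # must sit near private-cloud apps and datasets
    bursty: bool           # benefits from elastic public-cloud scale-out
    uses_paas: bool        # leans on hyper-scaler PaaS (serverless, AI)


def place(w: Workload) -> str:
    """Illustrative placement heuristic: hard constraints (regulation,
    data gravity) pin a workload to private cloud; elasticity and PaaS
    affinity pull it toward public; otherwise placement is free to be
    arbitraged on cost and service levels over time."""
    if w.regulated or w.latency_coupled:
        return "private"
    if w.bursty or w.uses_paas:
        return "public"
    return "either"
```

A real placement exercise would weigh many more dimensions (cost, bandwidth, existing investment, disaster recovery), but the shape is the same: constraints first, then optimization across the remaining freedom—which is exactly the flexibility a hybrid estate provides.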
Lines are blurring between public and private clouds
In the past, betting on a hyper-scaler meant picking Public over Private. That is no longer the case. To support regulatory, performance, and data gravity requirements, the hyper-scalers are now offering Private Cloud carveouts in Public environments. VMware Cloud on AWS (VMC), Azure VMware Solution (AVS), and Google’s SAP, Oracle and Bare Metal solutions are good examples. Similarly, the hyper-scalers have been working on Private Cloud extensions, pushing their PaaS and IaaS solutions to their customers’ data centers or even further to support manufacturing and other OT use cases. Examples include Microsoft’s Azure Stack, AWS Outposts, Google’s Anthos, and Alibaba’s Apsara. Additionally, platforms like Red Hat’s OpenShift and Cloud Foundry have created what are essentially hybrid environments by introducing a heterogeneous technology layer at the foundational level, enabling connectivity across disparate technology platforms. This blurring of Public and Private under a Hybrid Cloud umbrella is likely to accelerate in the future. Over time, we will no longer see a delineation between “public” and “private” but instead, between “dedicated” and “shared.”