Data’s being used with increasing fluidity. Business units, tech teams, and clients alike are tapping into an enormous and rapidly growing universe of data. Within an enterprise, teams are turning to cloud services for varied reasons, and as more organizations move to multi-cloud, there’s a growing need to simplify the multi-cloud landscape.
When networked efficiently with centralized data management, multi-cloud can provide an interconnected repository that offers all the benefits of public cloud: easily migrating workloads, reducing risks and costs, and accelerating innovation. Data that’s accessible by all public clouds at the same time, over low-latency connections, lets users leverage the best cloud tools (including SaaS and PaaS services) to access and analyze it. Yet all too frequently, orchestrating data across clouds is cumbersome. Here’s a look at the main networking challenges and how to overcome them.
Without Multi-Cloud Capability, Networking Complexity Grows
Leveraging multiple public clouds at the same time often presents significant networking challenges. Consider orchestrating a data processing workload that scales out on demand, then bringing that data back to a shared repository. Without a cohesive multi-cloud implementation, common hurdles include:
- Job management: Without a common storage repository, you end up with separate database instances spread across different engines. If you’re working across three public clouds, you’ll have three job engines, each adding its own management overhead.
- Scratch space & output collation: Limited performance, based on what’s available within that cloud, requires manual collation of data. You’ll have data endpoints or results captured in GCP, Azure, or AWS, which means your team will need to figure out how to bring data together and leverage it as a single data set again.
- Source data: Storing duplicate copies of data in each cloud leads to higher storage costs and network charges, driving the need for data orchestration.
- Availability of resources: The instances you can deploy are specific to each public cloud, so capacity may be limited. In some cases (such as seasonal demand spikes or GPU-enabled instances), the instance types you need are simply scarce, which limits your ability to tap into them when you’re confined to a single cloud workspace.
- Cost arbitrage: When running workloads across multiple public clouds, the goal is to do so in the most cost-effective way, but spot instance prices change frequently. Relying on a single cloud limits your opportunity to save money: you’re either held hostage to that day’s prices or forced to turn the workload off until prices become affordable again (see the sketch after this list).
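To make the cost-arbitrage point concrete, here is a minimal Python sketch that picks the cheapest spot pool across clouds. The pool names and prices are hypothetical placeholders, not live pricing data; in practice you would pull quotes from each provider’s spot/pricing APIs.

```python
# Minimal sketch: choose the cheapest spot pool across clouds.
# The quotes below are hypothetical, illustrative numbers (USD/hour),
# not real prices from any provider's API.

SPOT_QUOTES = {
    "aws:p3.2xlarge": 1.10,
    "azure:NC6s_v3": 0.95,
    "gcp:n1-highmem-8+V100": 1.05,
}

def cheapest_pool(quotes, budget_per_hour):
    """Return the cheapest pool under budget, or None if everything exceeds it."""
    affordable = {pool: price for pool, price in quotes.items() if price <= budget_per_hour}
    if not affordable:
        return None  # single-cloud equivalent of "turn it off until prices drop"
    return min(affordable, key=affordable.get)

if __name__ == "__main__":
    choice = cheapest_pool(SPOT_QUOTES, budget_per_hour=1.00)
    print(f"Placing the workload on: {choice}")
```

With three clouds in play, there is almost always an affordable pool; with one, a price spike leaves you with no options.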
Simplified Multi-Cloud Networking
Multi-cloud data services can simplify multi-cloud networking by offering a single data repository that’s simultaneously available across clouds. By presenting data to multiple clouds at the same time, enterprises can leverage the most suitable cloud services from any cloud. This scenario eliminates cloud vendor lock-in and the storage expenses associated with keeping multiple copies of the same data.
Let’s look at an example where the grid engine master runs on top of Azure and also manages GCP and AWS endpoints through the same backing data repository. All workloads are available and reachable through that single engine master inside Azure, which lets you grab GPU resources from several resource pools. This configuration provides tremendous scale, with the ability to hit more than 1 million CUDA cores across all the different instance types.
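The following toy sketch illustrates the “one engine master, many pools” idea: a single placement decision made over GPU pools in several clouds. Pool names and free-GPU counts are hypothetical; a real grid engine would query its execution hosts rather than a static list.

```python
# Toy illustration of a single engine master selecting GPU capacity
# from pools in several clouds. Pool names and free-GPU counts are
# made up for illustration.

POOLS = [
    {"cloud": "azure", "pool": "nc-series", "free_gpus": 64},
    {"cloud": "gcp", "pool": "v100-pool", "free_gpus": 128},
    {"cloud": "aws", "pool": "p3-fleet", "free_gpus": 32},
]

def place_job(gpus_needed):
    """Pick the pool with the most free capacity that can host the job."""
    for pool in sorted(POOLS, key=lambda p: p["free_gpus"], reverse=True):
        if pool["free_gpus"] >= gpus_needed:
            return pool
    return None  # nothing fits: wait, or scale out another pool

print(place_job(gpus_needed=48))  # e.g. {'cloud': 'gcp', 'pool': 'v100-pool', 'free_gpus': 128}
```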
This also provides the ability to write back to a common data platform without writing into each particular cloud and then collating the results. By creating a common, single namespace across all the public clouds, you can leverage, automate, and move workloads between clouds without changing things like core file availability or the back-end mount points. And because attaching data sets is easy, you can scale bandwidth using methods like adding network tunnels or tuning the maximum configuration parameters on VNet gateways, virtual gateways, or cloud routers (depending on which cloud you’re working with).
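Here is a short sketch of what that single namespace means for workloads: every cloud mounts the same backing repository at an identical path, so workers write results to one place and “collation” is just reading a directory. The mount path is a hypothetical example, not a product-specific default.

```python
# Sketch of the "single namespace" idea: the same repository is mounted
# at the same path in Azure, GCP, and AWS, so output lands in one place
# no matter which cloud ran the job. The path below is illustrative.

from pathlib import Path

SHARED_RESULTS = Path("/mnt/shared-data/results")  # identical mount point in every cloud

def write_result(job_id, payload):
    """Write a job's output to the shared namespace, regardless of which cloud ran it."""
    SHARED_RESULTS.mkdir(parents=True, exist_ok=True)
    out = SHARED_RESULTS / f"{job_id}.bin"
    out.write_bytes(payload)
    return out

def collate_results():
    """No per-cloud copy step: everything already sits in one directory."""
    return sorted(SHARED_RESULTS.glob("*.bin"))
```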
Benefits of Streamlined Multi-Cloud Networking
Whether your data sets address just one public cloud or span three interconnected ones, this architecture supports innovation through the best-in-class services inside each public cloud. Multi-cloud networking makes job management easier by running instances and managing endpoints from a single location, with one shared file system and one engine to rule them all. And with the results of data processing delivered back to a single repository (rather than separate destinations), tremendous scratch space is made available.
Cost arbitrage also improves when a data set can be staged and run on whichever public cloud makes the most sense for your organization (e.g., based on instance type availability, GPU generation, or the ability to scale to the number you need). Arbitrage on spot instance pricing, for example, hedges against the risk of limited instance availability by keeping all three public clouds as options for where an app or environment runs.
The complexity of moving data too often means that organizations say “no” to the ability to tap into innovation. Efficient multi-cloud networking can provide cost-effective, enterprise-scale solutions for large data sets as your enterprise grows.